darcs

Issue 647 wish: automated benchmarking and comparison

Title: wish: automated benchmarking and comparison
Priority: feature
Status: resolved
Milestone:
Resolved in:
Superseder:
Nosy List: darcs-devel, dmitry.kurochkin, ertai, jaredj, kowey, markstos, thorkilnaur, tommy, zooko
Assigned To:
Topics: Performance

Created on 2008-02-06.17:24:00 by kowey, last changed 2009-10-24.00:04:43 by admin.

Messages
msg3157 (view) Author: kowey Date: 2008-02-06.17:23:58
It would be really useful to have a quick and dirty benchmarking script that we
can either distribute with or alongside darcs

The script would just encode some current darcs benchmarking practices: namely,
getting a repository, obliterating 1000 patches and pulling them back.  Having
it do the best of N trials is probably a good idea too.  Automatically
summarising the results (% improvement) would be nice too.

Perhaps a good way to do it is to have it be parameterisable with two
directories (with darcs built in both) and a repository.  To avoid people having
to choose a repository, we could also distribute the GHC one as the prototypical
large-ish repository and hopefully set our sights higher one day.

Also, I think it's more important for us to have a basic version of this script
now, than a fancy one later :-)
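
A minimal sketch of such a script in Python (the repository path, darcs binary locations, and trial count are placeholders, not an actual implementation):

```python
import subprocess
import time

def best_of(times):
    """Best-of-N: the minimum is the least noisy summary of repeated runs."""
    return min(times)

def percent_improvement(old, new):
    """A positive result means 'new' is faster than 'old'."""
    return 100.0 * (old - new) / old

def time_roundtrip(darcs, repo, n_patches=1000, trials=3):
    """Time 'obliterate N patches, then pull them back' with a given darcs binary."""
    results = []
    for _ in range(trials):
        start = time.time()
        subprocess.check_call([darcs, "obliterate", "--last", str(n_patches),
                               "--all", "--repodir", repo])
        subprocess.check_call([darcs, "pull", "--all", "--repodir", repo])
        results.append(time.time() - start)
    return best_of(results)
```

The idea would be to call time_roundtrip once per darcs build directory, then summarise the pair with percent_improvement.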
msg3159 (view) Author: zooko Date: 2008-02-06.17:32:21
On Feb 6, 2008, at 10:24 AM, Eric Kow wrote:

> It would be really useful to have a quick and dirty benchmarking  
> script that we
> can either distribute with or alongside darcs

That's an excellent idea.

Regards,

Zooko
msg3160 (view) Author: dagit Date: 2008-02-06.17:33:22
On Feb 6, 2008 9:24 AM, Eric Kow <bugs@darcs.net> wrote:
> It would be really useful to have a quick and dirty benchmarking script that we
> can either distribute with or alongside darcs
[...]
> Also, I think it's more important for us to have a basic version of this script
> now, than a fancy one later :-)

I started making a fancy one as a class project (it's in Scala) and it
works okay; it just needs more usage patterns/benchmarks implemented.
It's actually quite simple, and I spent most of the time on the project
just learning Scala, so a Haskell re-implementation could probably be
made easily.  BTW, I see this as an argument for having a libdarcs.
It would be easier to create one-off versions of darcs for
benchmarking using a libdarcs than it is to create wrappers around
darcs.

Here is the link if you want to tear it apart or get ideas:
http://www.codersbase.com/index.php/DarcsSim

Probably the most useful thing you'll get out of it (if you don't use
it) is the way I use GHC's run-time to extract metrics.

Jason
msg3163 (view) Author: ertai Date: 2008-02-06.19:02:05
Excerpts from bugs's message of Wed Feb 06 17:24:02 UTC 2008:
> It would be really useful to have a quick and dirty benchmarking script that we
> can either distribute with or alongside darcs
[...]

I'm currently working on such a script.  I'm playing with it in my spare time,
and it's not yet usable.
msg3174 (view) Author: markstos Date: 2008-02-07.04:39:06
I started on this about three years ago. You are welcome to re-use my work from
then:

http://lists.osuosl.org/pipermail/darcs-devel/2005-March/001524.html

http://mark.stosberg.com/darcs_hive/darcs_speed_test/

    Mark
msg3214 (view) Author: droundy Date: 2008-02-07.20:44:48
I should mention that I've found a couple of patterns useful in tracking down
performance bugs:

darcs obliterate --last 100 -a && darcs pull -a

darcs unrecord --last 1 -a && darcs record -a -m foo

It would be helpful for benchmarking code to focus on inverse pairs like this,
because they're very easy to work with when tracking down performance problems.

And a good start for a benchmarking framework would simply be something that
sets up a few repos and then runs the above on each of them.
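
That start could be sketched in a few lines of Python (the repository directory is a placeholder, and darcs is assumed to be on $PATH):

```python
import subprocess
import time

def inverse_pairs(last=100):
    """Each entry is a (forward, inverse) command pair that leaves the
    repository unchanged, so a run can be repeated without re-fetching."""
    return [
        (["darcs", "obliterate", "--last", str(last), "-a"],
         ["darcs", "pull", "-a"]),
        (["darcs", "unrecord", "--last", "1", "-a"],
         ["darcs", "record", "-a", "-m", "foo"]),
    ]

def time_pair(forward, inverse, repodir):
    """Run a forward command and its inverse in repodir, returning wall time."""
    start = time.time()
    subprocess.check_call(forward, cwd=repodir)
    subprocess.check_call(inverse, cwd=repodir)
    return time.time() - start
```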

David
msg3719 (view) Author: kowey Date: 2008-03-03.15:48:58
I guess now that we have something which sort of works, I should point out that
work has started on this.  

We are developing a Haskell benchmarking library called Maybench.  The project
also includes a darcs-specific wrapper (darcs-benchmark), which we are primarily
focusing on.

See http://code.google.com/p/maybench

Or darcs get http://code.haskell.org/maybench

This is what the output of maybench looks like on the yi repository; this is
basically what ertai mentioned he was working on, with a little bit of fiddling
by me and David.

------------------8<----------------------------------
Running... /usr/local/bin/darcs get --quiet /foo/yi/ main
Running... /usr/local/bin/darcs changes --quiet --repodir main
unrecord last 1
Running... /usr/local/bin/darcs unrecord --quiet --last 1 --all --repodir main
0.110191s
record
Running... /usr/local/bin/darcs record --quiet --all -m "test patch" --repodir main
0.062309s
obliterate last 1
Running... /usr/local/bin/darcs obliterate --quiet --last 1 --all --repodir main
1.185752s
pull 1
Running... /usr/local/bin/darcs pull --quiet --all --repodir main
1.117513s
obliterate last 50
Running... /usr/local/bin/darcs obliterate --quiet --last 50 --all --repodir main
1.53781s
pull 50
Running... /usr/local/bin/darcs pull --quiet --all --repodir main
1.582683s
obliterate last 500
Running... /usr/local/bin/darcs obliterate --quiet --last 500 --all --repodir main
14.193587s
pull 500
Running... /usr/local/bin/darcs pull --quiet --all --repodir main
9.192307s
--------------------8<-------------------

As you can see, it marks benches.  There is some extra work to do, like making
the output nice and pretty, or generating comparisons with other versions of
darcs.  Perhaps even doing automated performance regression testing (raise an
alarm if darcs ever becomes 3X slower, or something like that).
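
That alarm could start as something as small as this sketch (the threshold and benchmark names are made up for illustration):

```python
def regressions(baseline, current, factor=3.0):
    """Return the benchmarks whose current time exceeds factor * baseline time.

    baseline and current map benchmark names to timings in seconds."""
    return [name for name, t in current.items()
            if name in baseline and t > factor * baseline[name]]
```

An empty result means no benchmark has crossed the 3X line.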

Help definitely wanted, even if it's "just" to write benchmarks.

All are invited to join in!  If you've started working on something like this,
you should especially consider submitting some code.  Unfortunately, the mailing
list settings are a bit strict; your message will be rejected if you do not
first subscribe.  We're working on loosening that up a bit.  Also, we're
experimenting with a liberal right-to-push model, so just ask if you want to be
able to push patches in directly.
msg3721 (view) Author: droundy Date: 2008-03-03.15:53:45
On Mon, Mar 03, 2008 at 03:49:02PM -0000, Eric Kow wrote:
> We are developing a Haskell benchmarking library called Maybench.  The project
> also includes a darcs-specific wrapper (darcs-benchmark), which we are primarily
> focusing on.
> 
> See http://code.google.com/p/maybench
> 
> Or darcs get http://code.haskell.org/maybench

I should perhaps mention that darcs-benchmark isn't currently working for
me... I'm not sure why.  It hangs on the call to darcs record.  :(
-- 
David Roundy
Department of Physics
Oregon State University
msg5291 (view) Author: kowey Date: 2008-08-06.16:37:00
reviving this bug:

we should collaborate with zooko to get benchmarks into the buildbot infrastructure

need also the standard darcs benchmarks and a way to extend them
msg5292 (view) Author: zooko Date: 2008-08-06.17:17:03
I will do some work on this soon.  Let's see, when, exactly?  Not Saturday. 
Maybe Sunday.  Maybe we could schedule an IRC-based sprint?  That would be
helpful to me.  --Z
msg5293 (view) Author: zooko Date: 2008-08-06.17:18:30
Here's an example of the kind of automated benchmarking that I (mostly) know how
to set up:

buildbot which runs benchmarking scripts at regular intervals and/or on new code
checkins:

http://allmydata.org/buildbot/waterfall?show_events=false&branch=&builder=memcheck-32&builder=memcheck-64&builder=speed-DSL&builder=speed-colo&reload=none

munin graphs of the benchmark results over time:

http://allmydata.org/trac/tahoe/wiki/Performance
msg5294 (view) Author: zooko Date: 2008-08-06.17:19:13
(I say I "mostly" know how to set that up because Brian Warner did a lot of it
and I may need some help writing munin plugins.)
msg5295 (view) Author: kowey Date: 2008-08-06.17:27:09
> I will do some work on this soon.  Let's see, when, exactly?  Not Saturday.
> Maybe Sunday.  Maybe we could schedule an IRC-based sprint?  That would be
> helpful to me.  --Z

I'm around Sunday evening.  Could we say starting 19:30 UTC?
msg5297 (view) Author: zooko Date: 2008-08-06.17:45:21
On Aug 6, 2008, at 11:27 AM, Eric Kow wrote:
> I'm around Sunday evening.  Could we say starting 19:30 UTC?

Cool!  Thanks.

I have to add the caveat that if I do exceptionally well at a Magic  
the Gathering tournament which starts Saturday, then I might still be  
engaged in it on Sunday.

However, more than likely I'll be knocked out of the competition on  
Saturday and will have nothing to do Sunday but housework, playing  
with my children and setting up darcs benchmarking infrastructure.  :-)

--Z
msg5357 (view) Author: zooko Date: 2008-08-10.15:53:26
I'm sorry, but there is an Internet outage in my neighborhood, so I probably
can't participate interactively online.

http://www.dailycamera.com/news/2008/aug/08/south-boulder-phone-repair-70-percent-finished/

While I'm offline I will be trying out Gwern's recent patches to build darcs on
Windows using cabal, to see if that works better for me than the current
autoconf build system.

I will post to this ticket the source code to the allmydata.org benchmarking
scripts which I referenced in msg5293.  If anyone wants to get started, you
could start by modelling a darcs benchmark script on those tahoe benchmark scripts.
msg5358 (view) Author: zooko Date: 2008-08-10.17:10:58
Okay, here are my notes about how to make automated benchmarking for darcs,
based on how Brian Warner did it, with some help from me, for Tahoe.  Other
people are more than welcome to take these notes and run with them; they will
also help me when I get back around to this task.

The darcs buildmaster is configured by this file:

http://buildbot.darcs.net/master.cfg

which is linked from the buildmaster page:

http://buildbot.darcs.net

The classes defined in that master.cfg file define different "build steps" which
do not come with buildbot, for example a "build step" to invoke darcs in order
to figure out its version number and then add that version number to the
waterfall.  You can see the output of that build step, which has class name
"DarcsVersion", on this waterfall:
http://buildbot.darcs.net/waterfall?builder=ertai+Linux-2.6.18+i686 .

The "make_factory()" function in the darcs master.cfg shows which build steps
get executed for each builder.

In the allmydata.org Tahoe buildmaster config there are some other build step
classes defined, like this:



class CheckMemory(ShellCommand):
    name = "check-memory"
    description = ["checking", "memory", "usage"]
    command = ["make", "check-memory"]
    logfiles = {"stats": "_test_memory/stats.out",
                "nodelog": "_test_memory/client.log",
                "driver": "_test_memory/driver.log",
                }

    def __init__(self, platform, *args, **kwargs):
        ShellCommand.__init__(self, *args, **kwargs)
        self.addFactoryArguments(platform=platform)
        self.platform = platform

    def createSummary(self, cmd):
        self.memstats = []
        # TODO: buildbot wants to have BuildStep.getLog("stats")
        for l in self.step_status.getLogs():
            if l.getName() == "stats":
                break
        else:
            return
        fn = open("tahoe-memstats-%s.out" % self.platform, "w")
        for line in l.readlines():
            fn.write(line)
            if ":" not in line:
                continue
            name, value = line.split(":")
            value = int(value.strip())
            self.setProperty("memory-usage-%s" % name, value)
            self.memstats.append( (name,value) )
        fn.close()

    def getText(self, cmd, results):
        text = ["memory", "usage"]
        modes = []
        peaks = {}
        for name, value in self.memstats:
            # current values of 'name' are:
            #  upload {0B,10kB,10MB,50MB}
            #  upload-POST {0B,10kB,10MB,50MB}
            #  download {0B,10kB,10MB,50MB}
            #  download-GET {0B,10kB,10MB,50MB}
            #  download-GET-slow {0B,10kB,10MB,50MB}
            #  receive {0B,10kB,10MB,50MB}
            #
            # we want to make these as short as possible to keep the column
            # narrow, so strings like "up-10k: 22M" and "POST-10M: 54M".
            # Also, only show the largest test of each type.
            mode, size = name.split()
            mode = {"upload": "up",
                    "upload-POST": "post",
                    "upload-self": "self",
                    "download": "down",
                    "download-GET": "get",
                    "download-GET-slow": "slow",
                    "receive": "rx",
                    }.get(mode, mode)
            if value >= 1e6:
                value_s = "%dM" % (value / 1e6)
            elif value >= 1e3:
                value_s = "%dk" % (value / 1e3)
            else:
                value_s = "%d" % value
            if size != "init":
                size_s = size
                if size_s.endswith("B"):
                    size_s = size_s[:-1]
                size_int = self._convert(size)
                if mode not in modes:
                    modes.append(mode)
                if mode not in peaks or (size_int > peaks[mode][0]):
                    peaks[mode] = (size_int, size_s, value_s)
            if name == "upload init":
                text.append("init: %s" % (value_s,))
        for mode in modes:
            size, size_s, value_s = peaks[mode]
            text.append("%s-%s: %s" % (mode, size_s, value_s))
        return text

    def _convert(self, value):
        if value.endswith("B"):
            value = value[:-1]
        if value.endswith("M"):
            return 1e6 * int(value[:-1])
        elif value.endswith("k"):
            return 1e3 * int(value[:-1])
        else:
            return int(value)



and:


class CheckSpeed(ShellCommand):
    name = "check-speed"
    description = ["running", "speed", "test"]
    descriptionDone = ["speed", "test"]

    def __init__(self, clientdir, linkname, *args, **kwargs):
        ShellCommand.__init__(self, *args, **kwargs)
        self.addFactoryArguments(clientdir=clientdir, linkname=linkname)
        self.command = ["make", "check-speed", "TESTCLIENTDIR=%s" % clientdir]
        self.linkname = linkname

    def createSummary(self, cmd):
        for l in self.step_status.getLogs():
            if l.getName() == "stdio":
                break
        else:
            return
        for line in l.readlines():
            if ":" not in line:
                continue
            line = line.strip()
            # we're following stdout here, so random deprecation warnings and
            # whatnot will also have ":" in them and might confuse us.
            name, value = line.split(":", 1)
            # we record Ax+B in build properties
            if name == "upload per-file time":
                self.setProperty("upload-B", self.parse_seconds(value))
            elif name.startswith("upload speed ("):
                # later tests (with larger files) override earlier ones
                self.setProperty("upload-A", self.parse_rate(value))
            elif name == "download per-file time":
                self.setProperty("download-B", self.parse_seconds(value))
            elif name.startswith("download speed ("):
                self.setProperty("download-A", self.parse_rate(value))

            elif name == "download per-file times-avg-RTT":
                self.setProperty("download-B-RTT", float(value))
            elif name == "upload per-file times-avg-RTT":
                self.setProperty("upload-B-RTT", float(value))

            elif name == "create per-file time SSK":
                self.setProperty("create-B-SSK", self.parse_seconds(value))
            elif name == "upload per-file time SSK":
                self.setProperty("upload-B-SSK", self.parse_seconds(value))
            elif name.startswith("upload speed SSK ("):
                self.setProperty("upload-A-SSK", self.parse_rate(value))
            elif name == "download per-file time SSK":
                self.setProperty("download-B-SSK", self.parse_seconds(value))
            elif name.startswith("download speed SSK ("):
                self.setProperty("download-A-SSK", self.parse_rate(value))

    def parse_seconds(self, value):
        if value.endswith("s"):
            value = value[:-1]
        return float(value)

    def parse_rate(self, value):
        if value.endswith("MBps"):
            return float(value[:-4]) * 1e6
        if value.endswith("kBps"):
            return float(value[:-4]) * 1e3
        if value.endswith("Bps"):
            # plain "Bps" is a 3-character suffix
            return float(value[:-3])

    def format_seconds(self, s):
        # 1.23s, 790ms, 132us
        s = float(s)
        if s >= 1.0:
            return "%.2fs" % s
        if s >= 0.01:
            return "%dms" % (1000*s)
        if s >= 0.001:
            return "%.1fms" % (1000*s)
        return "%dus" % (1000000*s)

    def format_rate(self, r):
        # 21.8kBps, 554.4kBps 4.37MBps
        r = float(r)
        if r > 1000000:
            return "%1.2fMBps" % (r/1000000)
        if r > 1000:
            return "%.1fkBps" % (r/1000)
        return "%dBps" % r

    def getText(self, cmd, results):
        text = ["speed"]
        f = open("tahoe-speed-%s.out" % self.linkname, "w")
        try:
            up_A = self.getProperty("upload-A")
            f.write("upload-A: %f\n" % up_A)
            up_B = self.getProperty("upload-B")
            f.write("upload-B: %f\n" % up_B)
            text.extend(["up:",
                         self.format_seconds(up_B),
                         self.format_rate(up_A)])
            up_B_rtt = self.getProperty("upload-B-RTT")
            f.write("upload-B-RTT: %f\n" % up_B_rtt)
        except KeyError:
            pass

        try:
            down_A = self.getProperty("download-A")
            f.write("download-A: %f\n" % down_A)
            down_B = self.getProperty("download-B")
            f.write("download-B: %f\n" % down_B)
            text.extend(["down:",
                         self.format_seconds(down_B),
                         self.format_rate(down_A)])
            down_B_rtt = self.getProperty("download-B-RTT")
            f.write("download-B-RTT: %f\n" % down_B_rtt)
        except KeyError:
            pass

        try:
            create_B_SSK = self.getProperty("create-B-SSK")
            up_B_SSK = self.getProperty("upload-B-SSK")
            up_A_SSK = self.getProperty("upload-A-SSK")
            f.write("create-B-SSK: %f\n" % create_B_SSK)
            # create-B-SSK used to be upload-B-SSK
            f.write("upload-B-SSK: %f\n" % up_B_SSK)
            f.write("upload-A-SSK: %f\n" % up_A_SSK)
            down_B_SSK = self.getProperty("download-B-SSK")
            down_A_SSK = self.getProperty("download-A-SSK")
            f.write("download-B-SSK: %f\n" % down_B_SSK)
            f.write("download-A-SSK: %f\n" % down_A_SSK)
            text.extend(["SSK:",
                         "c:%s" % self.format_seconds(create_B_SSK),
                         "u:%s" % self.format_seconds(up_B_SSK),
                         self.format_rate(up_A_SSK),
                         "d:%s" % self.format_seconds(down_B_SSK),
                         self.format_rate(down_A_SSK),
                         ])
        except KeyError:
            pass

        f.close()
        return text


Both of these subclass the standard ShellCommand build step class, and set their
self.command variable to "make something or other".  Therefore, when this step
gets executed, it executes "make something or other" in a subprocess (this also
works on Windows, by the way).  That subprocess does all of the actual work of
running the Tahoe Least-Authority Filesystem and measuring its performance, and
it emits its results on stdout in a format that this code parses, in this code's
"createSummary()" method.

This code then emits two things -- 1. measurements included in the rectangles in
the tahoe waterfall, visible here: http://allmydata.org/buildbot/waterfall , and
2. numbers written into a file in the format which can be consumed by munin. 
Munin is described here: http://munin.projects.linpro.no/ , and the graphs that
munin emits based on the numbers that this script writes into the file are
hyperlinked from here: http://allmydata.org/trac/tahoe/wiki/Performance .


Finally, the Tahoe buildmaster has a line like this to add the CheckMemory
buildstep to the chain of buildsteps which is set up in the make_factory() function:

    f.addStep(CheckMemory(platform))
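
A darcs-specific build step could follow the same pattern.  The parsing half, pulled out so it can be tested without buildbot, might look like the following (the metric names are hypothetical; they would come from whatever the darcs benchmark script prints):

```python
def parse_metrics(lines):
    """Parse 'name: value' lines from benchmark stdout, in the style of the
    createSummary() methods above, skipping chatter that has no colon or a
    non-numeric value.  Trailing 's' (seconds) suffixes are stripped."""
    metrics = {}
    for line in lines:
        if ":" not in line:
            continue
        name, value = line.strip().split(":", 1)
        try:
            metrics[name.strip()] = float(value.strip().rstrip("s"))
        except ValueError:
            continue  # not a numeric metric; ignore it
    return metrics
```

Each resulting entry would be fed to setProperty() and written out for munin, as in the Tahoe steps.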
msg5364 (view) Author: kowey Date: 2008-08-10.20:22:22
Thanks! Is there a repository where this config file is stored?

There is some basic infrastructure we need to set up: slaves should keep a
directory of standard tarballs (real-world repositories) in sync.  Next, I'd
like to see a simple test which simply unpacks one of the standard tarballs
(we should start with something small) and runs a darcs command on it.

It may be nice to have a test version of buildbot.darcs.net that I could push
patches to.

No actual speed or mem testing there, just infrastructure :-) Sorry not to help
out more.

Shall we reschedule for another time?  My schedule is Tue - Thu nights (UTC+1)
and weekends (except for Sunday afternoon).  I could also be around at 08:00
or 09:00 in the morning.
msg5614 (view) Author: kowey Date: 2008-08-20.09:00:45
Just updating everybody.  We are making progress!  Thanks to Zooko, buildbot now
does some simple time and memory benchmarking.  It uses maybench's
darcs-benchmark utility, which means that slaves will need to install maybench
at some point <http://hackage.haskell.org/cgi-bin/hackage-scripts/package/maybench>.

We can't close this ticket yet, because there is still a bit of work to do:

(1) we need to tweak the call to darcs-benchmark so that it runs on the darcs we
just compiled, and not the system darcs (oops)

(2) in addition to the automatically generated darcs-benchmark repositories
(thanks, Nicolas), we need to integrate a zoo of real world repositories

(3) we need some way of presenting and summarising the data that is generated... 

Summarising the data
--------------------
My goal is to have a place where we can instantly get an idea "how are we doing
on performance compared to before?"

What I am thinking is that we could have a page that lists all the slaves and
presents a sparkline showing the time and memory for that slave:

linux-1   : timespark (number) memspark (number) 
windows-2 : timespark (number) memspark (number)
windows-3 : timespark (number) memspark (number)
...

Sparklines are just tiny graphs without any extraneous stuff (like axes). 
Because they are so small, you can fit them into text or just present a whole
bunch of them in one page.  In other words, they allow you to see a lot of
information at a glance.

I was thinking that by looking down the numbers in a vertical column, we can see
how different platforms compare.  By looking at the individual sparklines, we
can see how performance has improved over time, and by looking at the pageful of
sparklines, we can see how these improvements vary from system to system.  

Also, we could have it so that clicking on each slave name leads to a page
showing much more detailed and comprehensive information.
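
For a quick first cut, a sparkline can even be plain text built from Unicode block characters, with no plotting library assumed:

```python
BARS = "▁▂▃▄▅▆▇█"  # eight bar heights, lowest to highest

def sparkline(values):
    """Map a series of timings onto the eight bar heights."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on a flat series
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))]
                   for v in values)
```

For example, sparkline over a slave's benchmark history gives one compact cell per slave in the table sketched above.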

Where next
----------
I am meeting with Zooko on #darcs (Sunday 24 August, 19:00 UTC) so that we can
work on the repository zoo.  Suggestions and criticism welcome in the meantime!
msg5617 (view) Author: dagit Date: 2008-08-20.16:26:57
> (3) we need some way of presenting and summarising the data that is generated...

Two ideas on chart generation:
1) gnuplot
2) http://hackage.haskell.org/cgi-bin/hackage-scripts/package/GoogleChart

Jason
msg6458 (view) Author: kowey Date: 2008-10-25.16:59:53
Some progress:

 darcs get http://code.haskell.org/darcs/zoo

(This requires maybench on your system)
msg6549 (view) Author: kowey Date: 2008-10-31.09:05:48
The Standard Darcs Benchmarks now live in 

  darcs get http://code.haskell.org/darcs/darcs-benchmark

I declare that this issue is now resolved, although there are certainly a lot of
improvements to be made.  When darcs-benchmark has all the core features we need
(still need a better 'at-a-glance' view) we should consider putting it in the
darcs darcs repository.
History
Date User Action Args
2008-02-06 17:24:01koweycreate
2008-02-06 17:32:22zookosetstatus: unread -> unknown
nosy: + zooko
messages: + msg3159
2008-02-06 17:33:24dagitsetnosy: + darcs-devel, dagit
messages: + msg3160
2008-02-06 19:02:11ertaisetnosy: + ertai
messages: + msg3163
2008-02-06 19:16:19koweysettopic: + Performance
nosy: droundy, tommy, beschmi, kowey, darcs-devel, zooko, dagit, ertai
2008-02-06 19:16:29koweysettopic: + ProbablyEasy
nosy: + jaredj
2008-02-07 04:39:08markstossetstatus: unknown -> deferred
nosy: + markstos
messages: + msg3174
title: automated benchmarking and comparison -> wish: automated benchmarking and comparison
2008-02-07 20:44:51droundysetnosy: droundy, tommy, beschmi, kowey, markstos, darcs-devel, zooko, dagit, jaredj, ertai
messages: + msg3214
2008-03-03 15:49:02koweysetnosy: droundy, tommy, beschmi, kowey, markstos, darcs-devel, zooko, dagit, jaredj, ertai
messages: + msg3719
2008-03-03 15:53:47droundysetnosy: droundy, tommy, beschmi, kowey, markstos, darcs-devel, zooko, dagit, jaredj, ertai
messages: + msg3721
2008-08-06 16:37:06koweysetpriority: wishlist -> feature
status: deferred -> unknown
topic: - ProbablyEasy
messages: + msg5291
nosy: - droundy, darcs-devel
2008-08-06 17:17:06zookosetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5292
assignedto: zooko
2008-08-06 17:18:33zookosetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5293
2008-08-06 17:19:16zookosetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5294
2008-08-06 17:27:14koweysetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5295
2008-08-06 17:45:25zookosetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5297
2008-08-10 15:53:29zookosetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5357
2008-08-10 17:11:02zookosetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5358
2008-08-10 20:22:25koweysetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5364
2008-08-17 09:05:12koweylinkissue1009 superseder
2008-08-20 09:00:48koweysetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5614
2008-08-20 16:27:00dagitsetnosy: tommy, beschmi, kowey, markstos, zooko, dagit, jaredj, ertai
messages: + msg5617
2008-10-25 16:59:55koweysetnosy: + dmitry.kurochkin, simon, thorkilnaur
messages: + msg6458
assignedto: zooko -> kowey
2008-10-31 09:05:51koweysetstatus: unknown -> resolved
nosy: tommy, beschmi, kowey, markstos, zooko, dagit, simon, thorkilnaur, jaredj, ertai, dmitry.kurochkin
messages: + msg6549
assignedto: kowey ->
2009-08-06 20:56:39adminsetnosy: - beschmi
2009-08-11 00:06:14adminsetnosy: - dagit
2009-08-25 17:32:29adminsetnosy: + darcs-devel, - simon
2009-08-27 14:13:26adminsetnosy: tommy, kowey, markstos, darcs-devel, zooko, thorkilnaur, jaredj, ertai, dmitry.kurochkin
2009-10-23 22:39:40adminsetnosy: + nicolas.pouillard, - ertai
2009-10-24 00:04:43adminsetnosy: + ertai, - nicolas.pouillard