Created on 2022-06-21.16:11:38 by bfrk, last changed 2023-07-29.10:48:58 by ganesh.
See mailing list archives
for discussion on individual patches.
msg23035 (view) |
Author: bfrk |
Date: 2022-06-21.16:11:36 |
|
Most of these I have had in my development branch (which I actively
use) for a long time, though they are freshly rebased for cleanup.
The fact that findUncommon previously was stricter than necessary gave me
some headaches. Its refactor in terms of findCommon broke one of the tests
for laziness, and for a long time it didn't occur to me that the breakage
was caused by darcs being *more* lazy, not less, though not in the command
under test but a different one we issued before cutting off access to the
parent repo. I chose to fix the test script, rather than revert the
refactor, since being more lazy wrt inaccessible patch content is a good
thing.
16 patches for repository http://darcs.net/screened:
patch b4f21fd8a224fbcdd0b3f760a9bee365be63bba6
Author: Ben Franksen <ben.franksen@online.de>
Date: Sat Jun 26 16:28:21 CEST 2021
* simplify taggedIntersection
This is not a pure refactor: it eliminates one of the branches of the
original code, namely the one where we used hopefullyM to decide whether the
content of the tag on the RHS is available. If it was, we did not try to
reorder patches locally to create a Tagged section for that tag but rather
gave up on that tag and recursively tried the next Tagged section of the
RHS. This simply makes no sense and I therefore eliminated that distinction.
This allowed a few more refactors that make the code more readable.
patch 05cfd390aef7e1c03aea5018cd489ac3d80442c4
Author: Ben Franksen <ben.franksen@online.de>
Date: Mon Jun 28 09:01:08 CEST 2021
* D.P.Depends: clarify the meaning of functions local to getUncovered
We make it clear that they are really graph algorithms, generalizing
PatchInfo to any type supporting decidable equality; see the comments for
details. This also removes the Maybe layer for the list of outgoing edges
(dependencies) associated with a vertex (patch): a non-tag vertex or a tag
vertex for which we don't have the patch content gets an empty list of
dependencies.
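To illustrate the graph view described above, here is a hypothetical toy model (plain lists and strings, not darcs's actual types): vertices are listed newest first, each paired with its outgoing edges (explicit dependencies); a non-tag vertex, or a tag vertex whose content is unavailable, simply carries an empty list. A vertex is "uncovered" if no newer vertex reaches it.

```haskell
-- Hypothetical sketch of the graph algorithm behind getUncovered,
-- not darcs's actual code. Vertices come newest first; each carries
-- its outgoing edges. Whether a vertex is itself covered or not, the
-- vertices it points to become covered for everything older.
uncovered :: Eq v => [(v, [v])] -> [v]
uncovered = go []
  where
    go _ [] = []
    go covered ((v, deps) : rest)
      | v `elem` covered = go (deps ++ covered) rest      -- absorbed
      | otherwise        = v : go (deps ++ covered) rest  -- uncovered so far

main :: IO ()
main =
  -- t2 covers t1, which in turn covers a and b; c is untouched
  print (uncovered [("t2",["t1"]),("t1",["a","b"]),("a",[]),("b",[]),("c",[])])
  -- prints ["t2","c"]
```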
patch 99a2b72c2ba82190a92789ca5500f243ef796468
Author: Ben Franksen <ben.franksen@online.de>
Date: Mon Jun 28 09:14:20 CEST 2021
* D.P.Depends: optimize concatenation order in dropDepsIn
The most common case will be tags that are "almost clean", that is, the
previous tag we encountered will have few remaining dependencies when we
encounter the next tag. This means it is more efficient to append the new
dependencies, rather than prepending them.
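As a reminder of why the order matters (a general Haskell fact, not specific to darcs): (++) is linear in the length of its left argument, so the short list of remaining dependencies from an almost-clean tag belongs on the left.

```haskell
-- (++) copies its left argument and shares the right one, so
-- `short ++ long` costs O(length short) while `long ++ short`
-- costs O(length long). With an almost-clean previous tag the
-- remaining dependencies are the short side. Hypothetical helper
-- name, purely illustrative.
appendShortFirst :: [a] -> [a] -> [a]
appendShortFirst remaining newDeps = remaining ++ newDeps

main :: IO ()
main = print (length (appendShortFirst [1, 2 :: Int] [3 .. 100000]))
-- prints 100000, having copied only two cells
```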
patch e64c99c26c29ebca1f02077ddc1ad7dc3e601c0e
Author: Ben Franksen <ben.franksen@online.de>
Date: Fri Sep 17 16:06:37 CEST 2021
* D.P.Depends.areUnrelatedRepos: document and inline local function checkit
patch 61b5d3dce9a53211c25e02997e4e654a25f6a847
Author: Ben Franksen <ben.franksen@online.de>
Date: Tue Jun 14 00:13:28 CEST 2022
* better explain how splitOnTag works
patch 635428d9c8618664b2c478a413feaea4dffd1875
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Jul 22 21:08:29 CEST 2021
* refactor findCommonWithThem and findUncommon
They are both special versions of findCommonAndUncommon.
patch 59aa3c5640ac0e1b7174a9c037ff5473e214bf44
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Sep 16 10:20:26 CEST 2021
* findCommonAndUncommon -> findCommon, inline findCommonWithThem, findUncommon
Instead of specialising findCommon trivially, provide generic conversion
functions for Forks: dropLeft, dropRight, and dropCommon. However, in most
cases we immediately pattern match on the result anyway, so it makes more
sense to just do that with the resulting Fork, using wildcard matches for
the parts we are not interested in.
patch 539b792edfb1007977b6d80e86b1484324664a47
Author: Ben Franksen <ben.franksen@online.de>
Date: Sun Jul 18 17:18:55 CEST 2021
* tests/network/lazy-clone.sh: run with or without cache
Note that with cache we have one more patch available in the last test.
patch 7488a0738d35be3d73e1e174d6cfe81d23b3a628
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Sep 16 08:47:38 CEST 2021
* tests/network/lazy-clone.sh: weaken requirements in last test
In this test we make a lazy clone of the first tag, drop the connection to
the upstream repo, and then expected `darcs log -v` to succeed. This used to
work, but only in this very special case, and only because of a loss of
laziness in the definition of Darcs.Patch.Depends.findUncommon. In general,
a lazy clone may be missing inventories, which makes `darcs log -v` fail if
the upstream repo is gone and we can't find the inventories in the global
cache. We now do a `darcs log` before disconnecting upstream in order to
guarantee that the repo has all inventories. In addition, we test that
`darcs log -v` works with the full lazy clone, too.
patch 849fa88082d5c87569bf1ae97a9252b0fa45d0bb
Author: Ben Franksen <ben.franksen@online.de>
Date: Sat Jul 3 07:45:21 CEST 2021
* refactor removeFromPatchSet using unwrapOneTagged
patch 15d2ed79d68afc5fdf567e58f420477b6171392b
Author: Ben Franksen <ben.franksen@online.de>
Date: Sat Jul 3 09:10:42 CEST 2021
* refactor findCommonAndUncommon in terms of a generalized findCommonRL
patch f2dcf2f79c41129178e001ad9b67439e9b8100db
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Sep 16 10:47:40 CEST 2021
* eliminate Darcs.Patch.Ident.merge2FL
In Darcs.Repository.Merge we already have the precondition that the input
sequences contain no common patches, so we might as well use 'merge' directly.
In the implementation of Darcs.Patch.Depends.mergeThem we now use findCommon
before applying merge to the uncommon parts. This gets rid of yet another
function with a confusing name.
patch e09a44a83810b4656d43f4d5ea708a2ad61de524
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Sep 16 12:19:06 CEST 2021
* replace mergeThem with patchSetMerge
Again, since we always immediately pattern match on the result, we might
as well match on the result of patchSetMerge directly.
patch f8f697224e336f4a18a3c7a7dd92b78e99ab23b5
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Sep 16 12:46:46 CEST 2021
* refactor patchSetUnion and patchSetIntersection as folds
This includes a few simplifications: for the intersection of two PatchSets
we take the common part of findCommon, and for the union we use appendPSFL.
Note that the simple definition of patchSetUnion as a left fold requires
that we optimize patchSetMerge for the case where one of the arguments is
empty, otherwise we lose a lot of efficiency.
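As a rough analogy (hypothetical toy code with plain lists, not the PatchSet types): a union written as a left fold starts from an empty accumulator, so the very first merge always has an empty left argument, and the merge should short-circuit that case instead of doing real work.

```haskell
-- Toy analogy for patchSetUnion as a left fold, not darcs's actual
-- code. The empty-argument cases let the fold skip any real merge
-- work when one side has nothing to contribute.
mergeLists :: Eq a => [a] -> [a] -> [a]
mergeLists [] ys = ys                               -- short-circuit
mergeLists xs [] = xs                               -- short-circuit
mergeLists xs ys = xs ++ filter (`notElem` xs) ys   -- the "real" merge

unionAll :: Eq a => [[a]] -> [a]
unionAll = foldl mergeLists []

main :: IO ()
main = print (unionAll [[1, 2], [2, 3], [], [3, 4 :: Int]])
-- prints [1,2,3,4]
```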
patch ad2a3eb9783302f631ee8a2374438a8f8121d5bc
Author: Ben Franksen <ben.franksen@online.de>
Date: Fri Sep 17 09:58:09 CEST 2021
* add/extend haddocks in D.P.Depends (includes a few minor layout changes)
patch 2fc8aaf82801046ce346a1b2c2bb480d0ae426fd
Author: Ben Franksen <ben.franksen@online.de>
Date: Sat Jun 18 20:30:07 CEST 2022
* fix bug in getUncovered
Using hopefullyM to return an empty list of explicit dependencies for an
unavailable tag is obviously wrong in general, that is, unless we already
know the tag is clean and it is the last patch under consideration. The way
we use getUncovered, this was (thankfully) not a fatal problem: at worst it
made getUncovered return too many patch names, which can only lead to
inefficiencies: a tag may depend on more patches than necessary, and
taggedIntersection may do more work than necessary.
Attachments
|
msg23415 (view) |
Author: ganesh |
Date: 2023-06-24.22:59:14 |
|
> This is not a pure refactor: it eliminates one of the branches of the
> original code, namely the one where we used hopefullyM to decide whether
> the content of the tag on the RHS is available. If it was, we did not
> try to reorder patches locally to create a Tagged section for that tag
> but rather gave up on that tag and recursively tried the next Tagged
> section of the RHS. This simply makes no sense and I therefore eliminated
> that distinction.
I'm still trying to get to grips with the old logic but here's my rough
understanding so far:
- `s1` is a local repo so cheap to access, `s2` might be remote or lazy (?)
with unavailable patches.
- We walk through `s2` looking for clean tags. When we find one, if it's
also clean in `s1` we're done. If not, we have a choice: try to make
it clean in `s1` or go looking for an alternative. If we think we can
easily find an alternative because the patches are available, then that's
what we do.
Is that right?
|
msg23447 (view) |
Author: bfrk |
Date: 2023-06-27.20:28:07 |
|
Yes, I think this is basically the idea behind the previous logic.
Thanks for working that out!
This may indeed work if s2 is a lazy, local repo. But if s2 is
remote, then this approach means we have to download another remote
inventory and its patches; perhaps even the whole repository, if the
tag isn't found. This takes a lot more time than locally making the
tag clean, since commuting patches with a tag doesn't involve any
costly commutes: we just have to compare patch infos.
|
msg23448 (view) |
Author: ganesh |
Date: 2023-06-27.20:31:11 |
|
Is this another situation where we'd benefit from a test case that
captures the expected laziness behaviour?
|
msg23454 (view) |
Author: bfrk |
Date: 2023-06-28.11:51:12 |
|
I think the tests in issue2293-laziness.sh cover it pretty well. I
can't count the number of times I experimented with changes in
D.P.Depends and then stumbled over this test failing, because I had
inadvertently destroyed laziness.
|
msg23466 (view) |
Author: ganesh |
Date: 2023-07-01.15:32:45 |
|
[I said]
> If not, we have a choice: try to make it clean in `s1` or go looking
> for an alternative. If we think we can easily find an alternative
> because the patches are available, then that's what we do.
[Ben said]
> But if s2 is remote than this approach means we have to download
> another remote inventory and its patches
The logic that I described as "think we can easily find an alternative
because the patches are available" was expressed as `hopefullyM t2`
returning `Just`.
I think you're saying that this wasn't actually really a true description
in the case of a remote repository, i.e. it would claim that the
patches are available but we'd then end up actually downloading things?
|
msg23467 (view) |
Author: ganesh |
Date: 2023-07-01.17:06:07 |
|
> * D.P.Depends: clarify the meaning of functions local to getUncovered
> * D.P.Depends: optimize concatenation order in dropDepsIn
> * D.P.Depends.areUnrelatedRepos: document and inline local function checkit
> * better explain how splitOnTag works
All OK
|
msg23468 (view) |
Author: ganesh |
Date: 2023-07-01.17:19:13 |
|
> * refactor findCommonWithThem and findUncommon
> They are both special versions of findCommonAndUncommon.
This causes findCommonWithThem to do extra work, namely
this step:
partitionFL (infoIn us') $ reverseRL them'
Are you sure that doesn't matter?
|
msg23475 (view) |
Author: bfrk |
Date: 2023-07-01.20:15:00 |
|
> The logic that I described as "think we can easily find an alternative
> because the patches are available" was expressed as `hopefullyM t2`
> returning `Just`.
>
> I think you're saying that this wasn't actually really a true description
> in the case of a remote repository, i.e. it would claim that the
> patches are available but we'd then end up actually downloading things?
More or less, yes. A patch is unavailable whenever fetchFileUsingCache (and
then parsing the content) /fails/. If we never force the patch by evaluating
its content, then we never cause fetchFileUsingCache to run, so we don't know
yet whether it is available or not. The logic is in
D.P.PatchInfoAnd.createHashed, where we catch exceptions and turn them into
Unavailable patches. The behavior you suggested would be achieved if we did not
include the remote repo in our sources, but that would mean we cannot have lazy
repos where patches are fetched (only) on demand.
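The mechanism can be sketched like this (a loose model of the idea in D.P.PatchInfoAnd.createHashed, with simplified types, not the real darcs API, and ignoring the laziness aspect that in darcs the fetch only happens when the patch content is demanded):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception (SomeException, try)

-- Simplified model: fetching and parsing may fail, and the failure
-- is caught and turned into an Unavailable value instead of being
-- propagated. Names loosely follow darcs but are illustrative only.
data Hopefully a = Actually a | Unavailable String

fetchHopefully :: IO a -> IO (Hopefully a)
fetchHopefully fetch = do
  r <- try fetch
  pure $ case r of
    Right x                   -> Actually x
    Left (e :: SomeException) -> Unavailable (show e)

main :: IO ()
main = do
  ok  <- fetchHopefully (pure (42 :: Int))
  bad <- fetchHopefully (ioError (userError "upstream gone") :: IO Int)
  putStrLn $ case ok  of { Actually n -> "available: " ++ show n
                         ; Unavailable _ -> "unavailable" }
  putStrLn $ case bad of { Actually n -> "available: " ++ show n
                         ; Unavailable _ -> "unavailable" }
```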
|
msg23481 (view) |
Author: bfrk |
Date: 2023-07-01.21:18:06 |
|
>> * refactor findCommonWithThem and findUncommon
>> They are both special versions of findCommonAndUncommon.
>
> This causes findCommonWithThem to do extra work, namely
> this step:
>
> partitionFL (infoIn us') $ reverseRL them'
>
> Are you sure that doesn't matter?
Your observation is correct. And no, I am not sure it doesn't matter. I
was naively expecting findCommon to be lazy enough that it wouldn't
make any difference, but that was a mistake.
I just hacked up an example where `darcs send --dry-run --no-cache >
/dev/null` consistently takes about 0.3 seconds with darcs-2.16.5 and
0.8 seconds for darcs-screened.
I am pretty sure it would be possible to make findCommon(AndUncommon)
fully lazy, so that the difference goes away, but that requires
coercions and abandoning the checks, so perhaps not the best idea. I
think I will at least play with this a bit.
|
msg23483 (view) |
Author: bfrk |
Date: 2023-07-01.22:53:38 |
|
First, to clarify, the times I measured are not wall-clock times but user times.
Here are the full reports, with darcs-reviewed exactly as cloned, and darcs-
findcommon with two patches, the one we discuss and a semantic dependency that I
failed to declare explicitly ("refactor findCommonAndUncommon in terms of a
generalized findCommonRL"):
darcs-reviewed send --dry-run > /dev/null  0,23s user 0,04s system 13% cpu 1,986 total
darcs-findcommon send --dry-run > /dev/null  0,74s user 0,12s system 31% cpu 2,701 total
The version with the two patches consistently takes about 0.5 seconds longer.
To double-check I implemented a findCommonWithThemRL in D.P.Ident as
findCommonWithThemRL xs ys =
    case partitionRL' (not . (`S.member` yIds) . ident) xs of
      cxs :> NilFL :> xs' -> reverseFL cxs :> xs'
      _ -> error "failed to commute common patches (lhs)"
  where
    yIds = S.fromList (mapRL ident ys)
and used that directly in findCommonWithThem. This makes the difference go away.
What I find rather mysterious is that re-defining findCommonRL instead as
findCommonRL xs ys =
    case partitionRL' (not . (`S.member` yids) . ident) xs of
      cxs :> _nilflxs :> xs' ->
        case partitionRL' (not . (`S.member` xids) . ident) ys of
          _cys :> _nilflys :> ys' ->
            Fork (reverseFL cxs) (unsafeCoercePStart xs') (unsafeCoercePStart ys')
  where
    xids = S.fromList (mapRL ident xs)
    yids = S.fromList (mapRL ident ys)
does *not* fix it. It has the same run-time behavior as the original version
with the extra two patches. How can this be? None of the results of the second
partitionRL' should be demanded if you ignore the third component of the Fork,
which is *not* strict in its arguments and neither is (:>).
|
msg23484 (view) |
Author: ganesh |
Date: 2023-07-02.07:56:34 |
|
> How can this be? None of the results of the second partitionRL'
> should be demanded if you ignore the third component of the Fork,
case is strict, the second partitionRL' needs to be evaluated enough
to find the two :> constructors.
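The point can be seen with a tiny standalone example (ordinary tuples, nothing darcs-specific): a case match forces its scrutinee to WHNF even when every component of the match is ignored.

```haskell
import Control.Exception (ErrorCall, evaluate, try)

-- A case match forces its scrutinee to WHNF: even though we ignore
-- both components, reaching the (,) constructor already evaluates
-- the thunk. With (:>) the situation is worse still, because its
-- existential witnesses rule out the usual lazy (~) match.
constUnit :: (a, b) -> ()
constUnit p = case p of (_, _) -> ()

main :: IO ()
main = do
  r <- try (evaluate (constUnit (error "boom"))) :: IO (Either ErrorCall ())
  putStrLn $ case r of
    Left _  -> "case forced the thunk"
    Right _ -> "unexpectedly lazy"
```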
|
msg23485 (view) |
Author: bfrk |
Date: 2023-07-02.08:41:27 |
|
Sigh. It's the old problem with pattern matches on existentially quantified data
constructors being strict, see https://gitlab.haskell.org/ghc/ghc/issues/17130
and the long comment in D.P.Witnesses.Sealed. The following version has the
laziness that I naively expected and thus avoids the performance regression:
findCommonRL xs ys =
    case (findCommonWithThemRL xs ys, findCommonWithThemRL ys xs) of
      (pxs, pys) -> Fork (fstP pxs) (sndP pxs) (sndP pys)
  where
    fstP :: (p :> p) wX wY -> p wX wB
    fstP (p :> _) = unsafeCoercePEnd p
    sndP :: (p :> p) wX wY -> p wA wY
    sndP (_ :> p) = unsafeCoercePStart p
Basically, for any existentially quantified constructor, one should always
imagine it has implicit bangs ('!') on its members. This is really a pain.
|
msg23486 (view) |
Author: bfrk |
Date: 2023-07-02.08:56:40 |
|
You are right that case is strict and I should have expected this.
Nevertheless, the standard fix (using ~ to make it lazy) is not
allowed here, so the only way to restore laziness is with the ugly
unsafeCoerce.
|
msg23487 (view) |
Author: bfrk |
Date: 2023-07-02.09:10:38 |
|
The practical question then remains: how should I fix the
regression?
* Keep the refactors and use unsafeCoerce to force findCommonRL to
be lazy, removing the extra check that the common sequences are
really (nominally) equal in both "branches".
* Or revert the refactor i.e. re-add and use
D.P.Depends.findCommonWithThem.
I tend to prefer the former because I find it easier to understand
the code if it uses the same findCommon everywhere and because it's
a simple local change in D.P.Ident.
|
msg23488 (view) |
Author: bfrk |
Date: 2023-07-02.10:06:23 |
|
The main disadvantage is that the laziness of findCommonRL is
one-sided and not symmetric. In
case findCommonRL xs ys of
Fork common _ ys' -> ...
evaluating 'common' will trigger evaluation of the "partitionRL' ...
xs" part. And there is obviously no way to fix that.
This is a good argument for re-adding findCommonWithThem i.e.
rolling back (this part of) the refactor. OTOH, we already rely
(heavily) on the asymmetric performance characteristics of
findCommon.
I have trouble deciding what to do, so if you have a clear
preference, then I would probably go with that.
|
msg23489 (view) |
Author: ganesh |
Date: 2023-07-02.14:00:51 |
|
If I look at the code as it stands in reviewed, i.e. before this bundle,
my initial thoughts are that if we assume that findUncommon,
findCommonAndUncommon, and findCommonWithThem are the externally facing
interface of this area of code, then before a significant refactor we should
write some unit tests that capture the expected laziness behaviour.
I think this can be done by constructing lists of patches in code that have
"error" sprinkled in appropriate places where we don't expect them to get
evaluated.
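A minimal sketch of that idea with plain lists in place of patch sequences (hypothetical code, not the darcs API): error thunks sit just past the point the function under test should need, and the test passes only if they are never forced.

```haskell
import Control.Exception (ErrorCall, evaluate, try)

-- Hypothetical laziness test with plain lists instead of PatchSets:
-- the common-prefix function must return without ever touching the
-- error thunks hidden in the tails.
commonPrefix :: Eq a => [a] -> [a] -> [a]
commonPrefix (x : xs) (y : ys) | x == y = x : commonPrefix xs ys
commonPrefix _ _ = []

main :: IO ()
main = do
  let xs = [1, 2, 3] ++ error "forced the tail of xs" :: [Int]
      ys = [1, 2, 4] ++ error "forced the tail of ys" :: [Int]
  -- length forces the whole spine of the result, but a sufficiently
  -- lazy commonPrefix stops at the first mismatch (3 vs 4)
  r <- try (evaluate (length (commonPrefix xs ys))) :: IO (Either ErrorCall Int)
  putStrLn $ case r of
    Left e  -> "laziness broken: " ++ show e
    Right n -> "ok, common prefix has length " ++ show n
```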
If we then want to merge findCommonAndUncommon and findCommonWithThem,
any implementation choice is fine as long as it still passes the unit tests
for both.
I appreciate that the code in reviewed isn't quite the starting point
given these patches are in screened, but maybe the conceptual view can
still be applied.
Writing the tests is also not trivial but I'd be happy to help with that
in a few days.
|
msg23490 (view) |
Author: bfrk |
Date: 2023-07-03.17:30:10 |
|
Tests would certainly be appreciated if you have an idea how to do that.
But let's not forget that the primary goal is to avoid unnecessary commutations.
Laziness can help with that but it is not required.
After experimenting with various implementations I came to understand that (apart
from taggedIntersection which is common to all of them) the basic building block
for these functions is
findCommonWithThemRL xs ys = partitionRL (not . (`S.member` yids) . ident) xs
  where
    yids = S.fromList (mapRL ident ys)
This does the minimum necessary amount of commutation and is thus
"optimal". And findCommonRL really cannot do much better than (the
equivalent of) calling findCommonWithThemRL twice, i.e. once in each
direction.
With this in mind, the idea of defining findCommonWithThem in terms of findCommon
is an instance of abstraction inversion, a well-known anti-pattern
(https://en.wikipedia.org/wiki/Abstraction_inversion).
Thus I am going to rollback these two patches:
* refactor findCommonWithThem and findUncommon
* findCommonAndUncommon -> findCommon, inline findCommonWithThem, findUncommon
except perhaps the rename of findCommonAndUncommon -> findCommon; findUncommon
requires both partitions, so we could eliminate it, OTOH it has the advantage that
we can directly apply merge to the result.
I still like the idea of factoring out the partitionRL calls as sketched above,
which would be in line with
* refactor findCommonAndUncommon in terms of a generalized findCommonRL
which I'd like to keep; and to make it utterly clear which is the more primitive
operation here, re-define findCommonRL in terms of findCommonWithThemRL.
|
msg23494 (view) |
Author: ganesh |
Date: 2023-07-07.22:31:45 |
|
> But let's not forget that the primary goal is to avoid unnecessary
> commutations. Laziness can help with that but it is not required.
Yeah, though my thought was that like with the test script itself,
laziness is an even stronger property than not commuting. So if it
holds, it's worth having.
> I still like the idea of factoring out the partitionRL calls as
> sketched above, which would be in line with
> * refactor findCommonAndUncommon in terms of a
> generalized findCommonRL
It sounds plausible, though I haven't read that patch very
carefully yet.
|
msg23495 (view) |
Author: bfrk |
Date: 2023-07-08.08:18:10 |
|
Following up on review according to the plan I sketched, notwithstanding
possible addition of tests for laziness.
1 patch for repository http://darcs.net/screened:
patch 9641dd95c57e388cf74a3650036640db4f8f7688
Author: Ben Franksen <ben.franksen@online.de>
Date: Mon Jul 3 10:18:11 CEST 2023
* re-add findCommonWithThem and findUncommon
This is a rollback of
patch 635428d9c8618664b2c478a413feaea4dffd1875
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Jul 22 21:08:29 CEST 2021
* refactor findCommonWithThem and findUncommon
They are both special versions of findCommonAndUncommon.
and most of
patch 59aa3c5640ac0e1b7174a9c037ff5473e214bf44
Author: Ben Franksen <ben.franksen@online.de>
Date: Thu Sep 16 10:20:26 CEST 2021
* findCommonAndUncommon -> findCommon, inline findCommonWithThem, findUncommon
Defining findCommonWithThem and findUncommon in terms of findCommon is bad
for two reasons. The first is that findCommonWithThem now re-orders the
second argument, too, doing more work than needed. This is due to
insufficient laziness and could be fixed with some ugly tricks involving
unsafe coercions. But more importantly, it is an abstraction inversion.
Basically, findCommonRL (from D.P.Ident) has to partition both its arguments
into common and uncommon parts; it doesn't make sense to use the combination
of these two underlying partitionings only to later drop one half. To make
this clear, in D.P.Ident we add additional functions findCommonWithThemFL/RL
and express findCommonFL/RL in terms of them; and also use them to implement
the re-added findCommonWithThem and findUncommon (in D.P.Depends).
Attachments
|
msg23502 (view) |
Author: ganesh |
Date: 2023-07-08.12:49:03 |
|
> * refactor findCommonWithThem and findUncommon
> * findCommonAndUncommon -> findCommon, inline findCommonWithThem, findUncommon
> * refactor findCommonAndUncommon in terms of a generalized findCommonRL
> * re-add findCommonWithThem and findUncommon
I've now reviewed the combined effect of these four patches and I'm fine
with them.
|
msg23503 (view) |
Author: ganesh |
Date: 2023-07-08.12:53:11 |
|
> * tests/network/lazy-clone.sh: run with or without cache
> * tests/network/lazy-clone.sh: weaken requirements in last test
OK
|
msg23504 (view) |
Author: ganesh |
Date: 2023-07-08.13:27:27 |
|
> * refactor removeFromPatchSet using unwrapOneTagged
> * eliminate Darcs.Patch.Ident.merge2FL
> * replace mergeThem with patchSetMerge
> * refactor patchSetUnion and patchSetIntersection as folds
> -- The first two special cases are semantically redundant but important
> -- for optimization; patchSetUnion below relies on that.
> patchSetMerge us (PatchSet NilRL NilRL) = NilFL :/\: patchSet2FL us
> patchSetMerge (PatchSet NilRL NilRL) them = patchSet2FL them :/\: NilFL
I didn't understand this comment. patchSetUnion is special-cased for the singleton list
so will only hit this case when there's an empty patch set in the actual list passed in,
which I think would be rare?
|
msg23505 (view) |
Author: ganesh |
Date: 2023-07-08.13:45:00 |
|
> * add/extend haddocks in D.P.Depends (includes a few minor layout changes)
OK
> * fix bug in getUncovered
OK (I had to think about this for a while!)
Is the behaviour decently covered by existing tests?
|
msg23512 (view) |
Author: bfrk |
Date: 2023-07-09.16:28:08 |
|
> I didn't understand this comment. [...]
Well spotted. Probably a left-over from a previous version of patchSetUnion.
1 patch for repository http://darcs.net/screened:
patch 831b57a1a7eeedffe726423e957f5829c9ac3c7a
Author: Ben Franksen <ben.franksen@online.de>
Date: Sun Jul 9 18:10:14 CEST 2023
* remove redundant special cases for patchSetMerge
The comment said that patchSetUnion relies on them for optimization but that
is clearly not the case (at least not anymore). Manual performance tests
confirm that there is no difference.
Attachments
|
msg23513 (view) |
Author: bfrk |
Date: 2023-07-10.07:27:04 |
|
>> * fix bug in getUncovered
>
> OK (I had to think about this for a while!)
Yes, this one is tricky. And now that I think about it again, I am
not sure the situation this patch avoids can actually happen unless
you manually fiddle with repository files. Specifically, even a lazy
clone contains all patches referenced by the head inventory, except
the first (i.e. the latest known clean tag itself). Thus, later
(possibly dirty) tags are always available. You'd have to manually
remove the patch file for such a tag to make it unavailable.
> Is the behaviour decently covered by existing tests?
I don't think we have any tests to check which patches a tag
explicitly depends on. For a test that fails without this patch we
need to construct a repo with a dirty tag T1 that is unavailable. We
then record a new tag T2 and check that it does not explicitly
depend on any patch covered by T1.
|
|
Date | User | Action | Args |
2022-06-21 16:11:38 | bfrk | create | |
2022-06-21 16:16:15 | bfrk | set | status: needs-screening -> needs-review |
2023-06-24 22:59:14 | ganesh | set | status: needs-review -> review-in-progress messages: + msg23415 |
2023-06-27 20:28:07 | bfrk | set | messages: + msg23447 |
2023-06-27 20:31:11 | ganesh | set | messages: + msg23448 |
2023-06-28 11:51:13 | bfrk | set | messages: + msg23454 |
2023-07-01 15:32:47 | ganesh | set | messages: + msg23466 |
2023-07-01 17:06:08 | ganesh | set | messages: + msg23467 |
2023-07-01 17:19:13 | ganesh | set | messages: + msg23468 |
2023-07-01 20:15:02 | bfrk | set | messages: + msg23475 |
2023-07-01 21:18:07 | bfrk | set | messages: + msg23481 |
2023-07-01 22:53:40 | bfrk | set | messages: + msg23483 |
2023-07-02 07:56:35 | ganesh | set | messages: + msg23484 |
2023-07-02 08:41:27 | bfrk | set | messages: + msg23485 |
2023-07-02 08:56:40 | bfrk | set | messages: + msg23486 |
2023-07-02 09:10:38 | bfrk | set | messages: + msg23487 |
2023-07-02 10:06:23 | bfrk | set | messages: + msg23488 |
2023-07-02 14:00:51 | ganesh | set | messages: + msg23489 |
2023-07-03 17:30:10 | bfrk | set | messages: + msg23490 |
2023-07-07 22:31:47 | ganesh | set | messages: + msg23494 |
2023-07-08 08:18:11 | bfrk | set | files: + patch-preview.txt, re_add-findcommonwiththem-and-finduncommon.dpatch messages: + msg23495 |
2023-07-08 12:49:04 | ganesh | set | messages: + msg23502 |
2023-07-08 12:53:11 | ganesh | set | messages: + msg23503 |
2023-07-08 13:27:27 | ganesh | set | messages: + msg23504 |
2023-07-08 13:45:00 | ganesh | set | status: review-in-progress -> accepted-pending-tests messages: + msg23505 |
2023-07-09 16:28:08 | bfrk | set | files: + patch-preview.txt, remove-redundant-special-cases-for-patchsetmerge.dpatch messages: + msg23512 |
2023-07-10 07:27:04 | bfrk | set | messages: + msg23513 |
2023-07-29 10:48:58 | ganesh | set | status: accepted-pending-tests -> accepted |
|