> This is a good workaround for adding many files, if you can ensure there
> are no undesirable files lying around.
If that is an issue, you can supply filepath arguments in addition to
-l, just like with add.
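For example (the paths here are made up, and I'm assuming record; the
same should work for whatsnew and amend):

  darcs record -l src/new-module.hs docs/

Only untracked files under the given paths are then picked up.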
> I'm wondering what other operations are affected by this large-pending
> slowdown. I guess recording changes to many existing files is not,
> judging by this:
>
> [10:59:46] recording 1000 files: real 0m1.020s
> [10:59:49] recording 2000 files: real 0m2.000s
> [[...]
Right. Because in that case the pending patch is empty. Hunks are
(normally) not in the pending patch; we commute them out the back,
because they can be re-calculated from the difference between working
tree and pristine tree. This (and your other tests) confirms my
analysis: the problem only appears if
(1) we have many changes in pending, and then
(2) we record or amend many changes that are not in pending.
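To make the cost explicit, here is a toy model of the trap as I read it
(plain lists and list deletion stand in for darcs's FL sequences and
commute attempts, so this is only an illustration):

  import Data.List (delete)

  -- Each 'delete' scans the whole pending list when the change is not
  -- there (in darcs: a full run of failed commute attempts), so n
  -- removal attempts against an m-element pending cost O(n*m).
  removeAll :: [String] -> [String] -> [String]
  removeAll recorded pending = foldr delete pending recorded

With n and m both large, as in (1) + (2) above, that is the quadratic
behaviour you measured.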
Now to a possible fix.
My first idea was to change the standard sort order for prims, but this
would affect the UI negatively, so we shouldn't do that. My current plan
is to do something with the newly recorded changes similar to what we do
with pending when we "sift" it to get rid of hunks that can be commuted
out the back. Roughly,
  for_pend :> not_for_pend = partitionFL belongsInPending newly_recorded
    where
      belongsInPending = False for hunks and binaries, True otherwise
Then we first try to remove for_pend from pending. With the rest
(not_for_pend) we would first try to remove them from working, and only
if that fails, from pending. This should avoid the quadratic trap in
most of the common cases, including your test case.
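A minimal, self-contained sketch of that plan (Prim, belongsInPending
and siftRecorded are made up for illustration; real darcs works with
type-indexed FL sequences and commutation rather than plain lists and
Eq):

  import Data.List (delete, partition)

  data Prim = Hunk FilePath Int | Binary FilePath | Move FilePath FilePath
    deriving (Eq, Show)

  -- Hunks and binaries can be re-calculated from the difference between
  -- working and pristine, so they normally do not belong in pending.
  belongsInPending :: Prim -> Bool
  belongsInPending Hunk{}   = False
  belongsInPending Binary{} = False
  belongsInPending _        = True

  -- Split the newly recorded changes, remove for_pend from pending,
  -- and for the rest try working first, falling back to pending.
  siftRecorded :: [Prim] -> ([Prim], [Prim]) -> ([Prim], [Prim])
  siftRecorded newlyRecorded (pending, working) =
      foldr removeRest (pending', working) notForPend
    where
      (forPend, notForPend) = partition belongsInPending newlyRecorded
      pending' = foldr delete pending forPend
      removeRest p (pend, work)
        | p `elem` work = (pend, delete p work)
        | otherwise     = (delete p pend, work)

The point of the ordering is that for hunks the lookup in working almost
always succeeds, so the expensive scan of a large pending is skipped.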
Cheers
Ben