On mutation: how it subtly occurs in single-threaded code, and how it can disrupt the process of upgrading single-threaded code to be multi-threaded.
I saw this after struggling with a multithreaded project for a few days. It's my first time dealing with these kinds of problems, and your thoughts have really helped me out. Thank you.
This is a fascinating read. It does add a few more mental "checks" when thinking about codebases, though. I'm trying to keep effective code paths in mind when writing, but it's a bit daunting to know that moving to a multi-threaded environment would also require considering read/write or sync/no-sync differences before collapsing things into a single effective code path.
Just a small point on the `TableInsert` and `TableLookup` section. It's not enough for just `TableInsert` to contain synchronisation.
If the solution is to put a mutex/rwlock in `TableInsert`, then you must either also lock during calls to `TableLookup`, or use a fork/join-style architecture or rendezvous points to guarantee the two functions are never called at the same time.
It may be that you addressed that when you mentioned bucketing the access. Perhaps it would be good to mention that there needs to be some synchronisation of the threads to ensure these "bucketing" phases stay separate?
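To make the first option concrete, here is a minimal sketch of "lock in both places" (the `Table` wrapper and its key/value types are mine for illustration, not from the article):

```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Hypothetical wrapper standing in for the article's table.
struct Table {
    inner: RwLock<HashMap<u64, String>>,
}

impl Table {
    // Writers take the write lock...
    fn table_insert(&self, key: u64, val: String) {
        self.inner.write().unwrap().insert(key, val);
    }

    // ...and readers must take the read lock too. A lock held only inside
    // table_insert does nothing to protect a lookup that runs concurrently
    // with an insert.
    fn table_lookup(&self, key: u64) -> Option<String> {
        self.inner.read().unwrap().get(&key).cloned()
    }
}
```

The alternative is to drop the lock from `table_lookup` entirely and instead guarantee, structurally, that lookups never run while inserts are in flight, which is what the rest of this thread is about.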
This is true in the general case, but it doesn't account for a previous section, which outlined details that make it not hold here:
> But the details can change. Suppose that this effective codepath is used as a helper mechanism in two other overarching codepaths. In the first case, the call to ValFromKey is mutational in 99% of cases. In the second task, the call to ValFromKey is 100% non-mutational, and thus only doing a read-only lookup.
When I was referring to TableInsert, I meant it being used in this "first phase" of work, where it does require synchronization. TableLookup, being a read-only operation that never overlaps with any TableInsert, can be completely synchronization-free.
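As a rough sketch of what I mean by the two phases (the types and workload here are invented for illustration): all inserts go through a lock in a first scope, every worker joins, and only then do the read-only lookups run, with no locking at all.

```rust
use std::collections::HashMap;
use std::sync::Mutex;
use std::thread;

fn main() {
    // Phase 1: TableInsert-style calls mutate shared state, so they go
    // through a Mutex while the insert threads run.
    let table = Mutex::new(HashMap::<u64, String>::new());
    thread::scope(|s| {
        for t in 0..4u64 {
            let table = &table;
            s.spawn(move || {
                for k in (t * 100)..(t * 100 + 100) {
                    table.lock().unwrap().insert(k, format!("val-{k}"));
                }
            });
        }
    }); // the scope ends only after every insert thread has joined

    // Phase 2: TableLookup-style calls only read, and no insert can overlap
    // with them any more, so threads share a plain &HashMap with no lock.
    let table = table.into_inner().unwrap();
    thread::scope(|s| {
        for t in 0..4u64 {
            let table = &table;
            s.spawn(move || {
                for k in (t * 100)..(t * 100 + 100) {
                    assert!(table.get(&k).is_some());
                }
            });
        }
    });
}
```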
That makes total sense. And I'm in agreement with your article. Learning how to not synchronise is the most important part of writing performant multi-threaded code.
My point was just that you need to architect your code such that these calls don't overlap. And to do that you'd need to employ something like fork/join, or rendezvous points. I'm sure there are other ways too. Maybe it's just me, but I didn't feel like the article made that explicit enough. Hence my comment. Hopefully if anyone else has that concern this thread will set them at ease.
Thanks for the article, it was an entertaining and informative read.
Finally, a concrete example using Rust: the rayon library makes it easy to implement fork/join, while the standard library's Barrier type can be used for rendezvous points.
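A minimal sketch of both, assuming `rayon` is added as a dependency (the workload is invented):

```rust
use std::sync::Barrier;
use std::thread;

fn main() {
    // Fork/join with rayon: `join` runs both closures, potentially in
    // parallel, and returns only once both have finished, so code after it
    // can freely read whatever they produced.
    let data: Vec<u64> = (0..1_000).collect();
    let (left, right) = rayon::join(
        || data[..500].iter().sum::<u64>(),
        || data[500..].iter().sum::<u64>(),
    );
    assert_eq!(left + right, data.iter().sum::<u64>());

    // Rendezvous with std::sync::Barrier: every thread blocks at `wait`
    // until all of them arrive, which keeps an insert phase strictly before
    // a lookup phase without tearing the threads down in between.
    let workers = 4;
    let barrier = Barrier::new(workers);
    thread::scope(|s| {
        for _ in 0..workers {
            s.spawn(|| {
                // ... phase 1: mutating work (synchronised) ...
                barrier.wait(); // nobody proceeds until everyone has arrived
                // ... phase 2: read-only work (no synchronisation needed) ...
            });
        }
    });
}
```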