It is notable that, despite it being over 2 years since the question was first posted, the vast majority of votes on the Underload answer came on the first day: the Underload answer was +10 on the first day and is now +13. Likewise, the Ruby and JavaScript answers were +8 and +7 and are now both at +9. The 7 answer is an outlier on this metric, being at +6 after one day and +13 today, which is one thing that makes me suspicious that this meta post may be biasing the voting. (However, another possibility is that it's much higher up the page than the other high-scoring posts when sorting by "active", which many of the more dedicated CGCC users use when reviewing old posts, due to my edits golfing off another byte in January last year – these may have interfered with the experiment, but OTOH I didn't want the experiment to interfere with the main site.)
UPDATE 4: It's now a little over a year since update 3 was posted. The Underload and 7 answers are both on 13 points (which is interesting, because it means the Underload answer has a lower score than it did previously: I wonder whether that was an un-upvote or a downvote?), with the Ruby and JavaScript answers on 9 points.
I'm wondering whether traffic to the question in question is now predominantly from this meta post, rather than from the main site itself: that might help to explain the equalisation of scores. In any case, with the scores equalising, the experiment is now effectively over: the FGITW effect has finally stopped affecting the order in which the answers are ranked (unless someone chooses to sort by "oldest").
Full disclosure: I improved the 7 code in January 2022 (a little less than a year after the question was posted), golfing off 2 characters / 1 byte. So maybe the Underload code was "better" in that it was better-golfed. On the other hand, maybe this is an argument that even though 10 minutes isn't enough time to produce a good answer, 3 hours isn't enough time either, and crafting a really good answer to a question can take days or months of thought.