Comments on: How to deal with pseudoscience ? http://trilema.com/2014/how-to-deal-with-pseudoscience/
Moving targets for a fast crowd.

By: The naturalistic "fallacy" on Trilema - A blog by Mircea Popescu. (30 May 2021, 09:16:51 +0000) http://trilema.com/2014/how-to-deal-with-pseudoscience/#comment-164309

[...] Now, supposing there's two sets, one consisting of a million natural numbers from one to one million, and the other consisting of the numbers 8, 75, 119`500 and 996`000 -- what does the utterance "That 996`000 is a good example of a million" say about the numeral 996`000 ? Nothing at all. At issue is not the value, but its context, what's said's entirely that "the largest value in your set is not so far off from the largest value in my canonical set". That's all ; and the exact same statement could be applied to any other two sets, with whatever other numbers -- it'd still not be about the god damned numbers.

I say "it's a good example" because in my moral system goodness of examples is closeness, and my ethical approach readily resolves 996`000 as close enough to 1`000`000 in that context ; no more is involved whatsoever. There's nothing good about the number, there's nothing good about the example. Because I had pre-decided that in the case of examples closeness is what counts for goodness, therefore I said "it's a good example". It's not the number that "earned" it, it's my divine grace that granted it. Sola gratia et gratia sola ; the assignment does not flow the other way, there's nothing in that example that characterizes or qualifies goodness whatsoever.

Should all eternity hence be spent assigning "good" to examples involving odd numbers, the "empirical observation" that "there's something about Mary odd numbers" will be and perpetually remain absolutely spurious. Unless and until I declare oddity good, oddity's entirely irrelevant, and stays irrelevant irrespective of "social sciences". [...]
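
(A purely illustrative aside : the point can be restated mechanically. In the Python sketch below, the is_good_example name, the canonical set and the one-percent tolerance are all invented stand-ins, not anything from the quoted text ; the "good" label is granted by the pre-declared rule and by nothing in the numbers themselves.)

    # Illustrative sketch only : "good example" as nothing but a pre-declared closeness rule.
    def is_good_example(candidate, canonical, tolerance=0.01):
        # The label is granted by this rule alone ; nothing about the number itself earns it.
        top = max(canonical)
        return abs(top - candidate) <= tolerance * top

    canonical = range(1, 1_000_001)              # one to one million
    print(is_good_example(996_000, canonical))   # True : within 1% of 1`000`000 under the declared rule
    print(is_good_example(8, canonical))         # False : same rule consulted, nothing else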

By: Mircea Popescu (29 May 2021, 15:23:30 +0000) http://trilema.com/2014/how-to-deal-with-pseudoscience/#comment-164289

Can't say as I have, and to my detriment -- there's some serious comedy bricks in there. Then again... this procedure whereby spinsters and wankers self-importantly inquire with "large" (in the sense of, <a href="http://trilema.com/2015/the-genetics-of-intelligence/#footnote_13_60664">small</a>) groups of <a href="http://trilema.com/2014/this-is-just-about-everyone-on-the-internet/">campus ditzes</a> what "their feelings" are on whatever X, then meticulously misinterpret the "results" towards whatever <a href="http://trilema.com/2021/covid-is-not-a-hoax-and-assorted-new-bureaucrat-mythology-misinformation-mendacious-if-self-serving-bullshit/">politruk pipe-dreams</a> are momentarily fashionable, is in no danger of ever being mistaken for science. How're the ozone hole layers doing these days btw ?

In particular these idiots' procedures (and the related "Death Salience" morons, "measuring" word completion for chrissakes, did you complete sk___ with skull more often than with skill ? <a href="http://trilema.com/2021/degeneration-by-max-nordau-adnotated-diagnosis/?b=When,&e=doubled#select">MEASURABLE</a>!!!) come down to just so much repurposed Rorschach. Once that <a href="http://trilema.com/2020/forum-logs-for-23-jan-2018/#2399908">crystal ball</a> crashed open (in ten thousand shards of <a href="http://trilema.com/2019/thelastpsychiatristcom-8-characteristics-of-family-annihilators-adnotated/#footnote_13_85307">laughter</a>), the idiots did what they always do -- moved on. You wouldn't have expected them to <em>learn something</em>, which is to say <em>change</em>, I hope. They can't change, which is why they're <a href="http://trilema.com/2011/strategia-nulitatii/">where they are</a>.

Old gypsy women (which is to say "clever", because hey, under such pressure as a demographic minority experiences in a nation-state, one'd better learn how to be clever) have an equally scientific scrying practice whereby... Actually, <a href="http://trilema.com/wp-content/uploads/2021/05/an-old-hand.jpg">here</a>, let me return the favour by publicizing some ethnographical notation of my own :

<blockquote>The beans : forty-one grains of maize are chosen, or at need some other seed, such as beans, and with the right hand they are swirled into a little pile, to the right, the way the sun goes, while chanting : "Forty-one beans, you know well, you guess well ; if it is to be well, and after my thought, let them fall on 9, a colac [round ritual bread] on the threshold and joy in both hands ; but if not, let them fall by one and by two, and let nothing come of you !" [including the non-tenure threat, fancy that wonder, it's older than dirt!]

Then she makes the sign of the cross over the grains with her hand and parts them into three little piles. One pile goes to the left, there it is called "in a stranger's hand" ; another in the middle, called "in the house" ; and the third to the right, called "in his own hand", that of the one who seeks.

From these piles, beginning with the one in the middle, the beans are counted off four at a time and set aside, and those that remain, be they 1, 2, 3 or 4, are laid in a line at the top ; the same is done with the other two piles. Then the set-aside beans are mixed together and another row of divining beans is made the same way, laid beneath the top row. The same is done a third time, and the rest is put aside as of no further use.

The beans of the first row, taken across, must fall on 5 or on 9. If they fall on 5, the matter will be in haste ; if on 9, surer but later. 3 grains falling in one place are joy ; 4 are talk, and if 4 beans fall in the middle, in the house, it means you are full at heart, sated, content. 2 grains mean doubt or vexation ; 1 is in vain, or news if it is up top, while if it falls below it is an arriving bean, someone is to arrive.

If they fall two by ones, or three in a row, it means a journey. If 3 fall up top, in the middle, it is called joy on the head, or a colac. If they fall in the bottom row, in the middle, it is a colac on the threshold. One falling below on the threshold, in the house, is an arriving bean and thus it is well, what you are thinking will come to pass, for the road is open ; whereas if there are 4 below, on the threshold, the road is closed, it will not come to pass, or there is talk.

If in your beans a 3 falls next to a 4, it means words of joy ; if a 1 and a 2 fall next to a 4, they are words of vexation, and empty. When they fall in the middle, crosswise or along the bottom, 4 in a row every time, what you are thinking will not come to pass, it is closed, or you will have a lawsuit [legal proceedings], an uproar [as in hue, of hue-and-cry]. In other places they say it is good, it is a table [a meal].

If the beans fall by ones and twos, then have no hope at all, for "it is scarcer than water" [confessing to the practice's desert nomadic roots], nothing is complete. If however, the beans being thus bad, you count them all and find there are 17, then all will be well. The beans of the left hand show what thought the person you have set your mind on holds towards you.

If the beans in the left hand fall just like those in the right, then you and that one are of one thought, and thereby love is shown. The arriving bean in the corner, down in the right hand, is the fulfilment of the thought ; if there are more beans in the corner, it shows delay. The beans left over, counted off by fours, are called "tables" ; if the tables come out even, it is good.</blockquote>

I am not kidding : if "scientific" means "confirmation" over "large datasets", nothing a coupla dozen "labs" managed over a few grosse silly gooses could possibly compare to the literal millennia of practice here available. Datul in bobi has way the fuck more "data behind it" than anything contemporary niggers could ever hope to achieve -- and the exact same sort of data, too.
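
(A purely illustrative aside : the layout procedure quoted above is mechanical enough to sketch. The Python toy below is an assumption-laden stand-in, with the teller's three-way cut of the pile modelled as a random split and the cast_beans name invented for the occasion ; it only reproduces the counting, not the reading.)

    import random

    def cast_beans(total=41, seed=None):
        # One layout : the grains are split into three piles (stranger's hand / house / own hand) ;
        # each pile is counted off by fours and the remainder, 1 to 4, is kept in the row ;
        # the set-aside beans are re-mixed and the step repeated twice more, giving three rows.
        rng = random.Random(seed)
        rows, pool = [], total
        for _ in range(3):
            cut1, cut2 = sorted(rng.sample(range(1, pool), 2))  # stand-in for the teller's cut
            piles = [cut1, cut2 - cut1, pool - cut2]
            row = [((p - 1) % 4) + 1 for p in piles]            # remainder after counting off fours
            rows.append(row)
            pool -= sum(row)                                    # kept beans leave the pool
        return rows, pool

    rows, set_aside = cast_beans(seed=1)
    print(rows)                                # three rows of three values, each between 1 and 4
    print("first row falls on", sum(rows[0]))  # with 41 grains this is always 5 or 9, as the quote demands
    print("tables :", set_aside // 4)          # leftover fours are the "mese" ; an even count reads as good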

By: TLP (29 May 2021, 14:24:13 +0000) http://trilema.com/2014/how-to-deal-with-pseudoscience/#comment-164285

Speaking of your latest article, have you ever heard of <a href="https://www.cos.io/blog/many-labs-4-failure-replicate-mortality-salience-effect-and-without-original-author-involvement">COS</a>?

We present results from Many Labs 4, which was designed to investigate whether contact with original authors and other experts improved replication rates for a complex psychological paradigm. However, the project is largely uninformative on that point as, instead, we were unable to replicate the effect of mortality salience on worldview defense under any conditions.

Recent efforts to replicate findings in psychology have been disappointing. There is a general concern among many in the field that a large number of these null replications are because the original findings are false positives, the result of misinterpreting random noise in data as a true pattern or effect.

But, failures to replicate are inherently ambiguous and can result from any number of contextual or procedural factors. Aside from the possibility that the original is a false positive, it may instead be the case that some aspect of the original procedure does not generalize to other contexts or populations, or the procedure may have produced an effect at one point in time but those conditions no longer exist. Or, the phenomena may not be sufficiently understood so as to predict when it will and will not occur (the so-called "hidden moderators" explanation).

Another explanation — often made informally — is that replicators simply lack the necessary expertise to conduct the replication properly. Maybe they botch the implementation of the study or miss critical theoretical considerations that, if corrected, would have led to successful replication. The current study was designed to test this question of researcher expertise by comparing results generated from a research protocol developed in consultation with the original authors to results generated from research protocols designed by replicators with little or no particular expertise in the specific research area. This study is the fourth in our line of "Many Labs" projects, in which we replicate the same findings across many labs around the world to investigate some aspect of replicability.

To look at the effects of original author involvement on replication, we first had to identify a target finding to replicate. Our goal was a finding that was likely to be generally replicable, but that might have substantial variation in replicability due to procedural details (e.g. a finding with strong support but that is thought to require "tricks of the trade" that non-experts might not know about). Most importantly, we had to find key authors or known experts who were willing to help us develop the materials. These goals often conflicted with one another.

We ultimately settled on Terror Management Theory (TMT) as a focus for our efforts. TMT broadly states that a major problem for humans is that we are aware of the inevitability of our own death; thus, we have built-in psychological mechanisms to shield us from being preoccupied with this thought. In consultation with those experts most associated with TMT, we chose Study 1 of Greenberg et al. (1994) for replication. The key finding was that, compared to a control group, U.S. participants who reflected on their own death were higher in worldview defense; that is, they reported a greater preference for an essay writer adopting a pro-U.S. argument than an essay writer adopting an anti-U.S. argument.

We recruited 21 labs across the U.S. to participate in the project. A randomly assigned half of these labs were told which study to replicate, but were prohibited from seeking expert advice ("In House" labs). The remaining half of the labs all followed a set procedure based on the original article, and incorporating modifications, advice, and informal tips gleaned from extensive back-and-forth with multiple original authors ("Author Advised" labs).* In all, the labs collected data from 2,200+ participants.

The goal was to compare the results from labs designing their own replication, essentially from scratch using the published method section, with the labs benefitting from expert guidance. One might expect that the latter labs would have a greater likelihood of replicating the mortality salience effect, or would yield larger effect sizes. However, contrary to our expectation, we found no differences between the In House and Author Advised labs because neither group successfully replicated the mortality salience effect. Across confirmatory and exploratory analyses we found little to no support for the effect of mortality salience on worldview defense at all.

In many respects, this was the worst possible outcome — if there is no effect then we can't really test the metascientific questions about researcher expertise that inspired the project in the first place. Instead, this project ends up being a meaningful datapoint for TMT itself. Despite our best efforts, and a high-powered, multi-lab investigation, we were unable to demonstrate an effect of mortality salience on worldview defense in a highly prototypical TMT design. This does not mean that the effect is not real, but it certainly raises doubts about the robustness of the effect. An ironic possibility is that our methods did not successfully capture the exact fine-grained expertise that we were trying to investigate. However, that itself would be an important finding — ideally, a researcher should be able to replicate a paradigm solely based on information provided in the article or other readily available sources. So, the fact that we were unable to do so despite consulting with original authors and enlisting 21 labs, all of which were highly trained in psychology methods, is problematic.

From our perspective, a convincing demonstration of basic mortality salience effects is now necessary to have confidence in this area moving forward. It is indeed possible that mortality salience only influences worldview defense during certain political climates or among catastrophic events (e.g. national terrorist attacks), or other factors explain this failed replication. A robust Registered Report-style study, where outcomes are predicted and analyses are specified in advance, would serve as a critical orienting datapoint to allow these questions to be explored.

Ultimately, because we failed to replicate the mortality salience effect, we cannot speak to whether (or the degree to which) original author involvement improves replication attempts.** Replication is a necessary but messy part of the scientific process, and as psychologists continue replication efforts it remains critical to understand the factors that influence replication success. And, it remains critical to question, and empirically test, our intuitions and assumptions about what might matter.

*At various points we refer to "original authors". We had extensive communication with several authors of the Greenberg et al., 1994 piece, and others who have published TMT studies. However, that does not mean that all original authors endorsed each of these choices, or still agree with them today. We don’t want to put words in anyone’s mouth, and, indeed, at least one original author expressly indicated that they would not run the study given the timing of the data collection — September 2016 to May 2017, the period leading up to and following the election of Donald Trump as President of the United States. We took steps to address that concern, but none of this means the original authors "endorse" the work.

By: thelastpsychiatrist.com - The Psychological Uncertainty Principle. Adnotated. on Trilema - A blog by Mircea Popescu. (13 August 2020, 19:52:49 +0000) http://trilema.com/2014/how-to-deal-with-pseudoscience/#comment-153640

[...] before. All other nonsense purporting to misrepresent itself as "judgement" is by that very fact invalid. [↩] The postmodern comicity of this nonsense. Trinity said it best, you know ? Epic. [...]

By: The last blog on Trilema - A blog by Mircea Popescu. (24 July 2020, 01:39:20 +0000) http://trilema.com/2014/how-to-deal-with-pseudoscience/#comment-153008

[...] otherwise, in any sort of grounded retelling, the intellectual value of this sad wank is exactly the same nil. As you do not know but could at great expense find out, there were very intricate discussions on [...]
