You cannot possibly trust someone on a treadmill.
I blurted out: “Isn’t this unethical?” The response: “It is perfectly legal,” followed by the even more incriminating “we have plenty of former regulators on the staff,” (a) implying that what is legal is ethical and (b) asserting that former regulators have an edge over ordinary citizens.
Think about it a bit further: the more complex the regulation, the more bureaucratic the network, the more a regulator who knows the loops and glitches would benefit from it later, as his edge as a former regulator would be a convex function of his differential knowledge. This is a franchise, an asymmetry one has at the expense of others. (Note that this franchise is spread across the economy; the car company Toyota hired former U.S. regulators and used their “expertise” to handle investigations of its car defects.)
First, the more complicated the regulation, the more prone to arbitrages by insiders. This is another argument in favor of heuristics. Twenty-three hundred pages of regulation—something I can replace with Hammurabi’s rule—will be a gold mine for former regulators. The incentive of a regulator is to have complex regulation. Again, the insiders are the enemies of the less-is-more rule.

Second, the difference between the letter and the spirit of regulation is harder to detect in a complex system. The point is technical, but complex environments with nonlinearities are easier to game than linear ones with a small number of variables. The same applies to the gap between the legal and the ethical.

Third, in African countries, government officials get explicit bribes. In the United States they have the implicit, never mentioned, promise to go work for a bank at a later date with a sinecure offering, say, $5 million a year, if they are seen favorably by the industry. And the “regulations” of such activities are easily skirted.
The researcher gets the upside, truth gets the downside. The researcher’s free option is in his ability to pick whatever statistics can confirm his belief—or show a good result—and ditch the rest. He has the option to stop once he has the right result.
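The stopping option can be made concrete with a small simulation (my illustration, not from the text; all function names are mine): a “researcher” collects pure noise in batches and runs a significance test after every batch, stopping the moment p falls below 0.05. The null hypothesis is true by construction, yet the freedom to stop at the first good-looking result inflates the false-positive rate well above the nominal 5 percent.

```python
import math
import random
import statistics


def z_pvalue_two_sided(mean, n, sigma=1.0):
    """Two-sided p-value for a sample mean under H0: mu = 0, known sigma."""
    z = abs(mean) / (sigma / math.sqrt(n))
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))


def peeking_trial(max_batches=20, batch=10, alpha=0.05, rng=random):
    """Draw noise in batches; exercise the option to stop at the first p < alpha."""
    data = []
    for _ in range(max_batches):
        data.extend(rng.gauss(0, 1) for _ in range(batch))
        p = z_pvalue_two_sided(statistics.fmean(data), len(data))
        if p < alpha:
            return True  # "significant" finding on pure noise; researcher stops
    return False  # honest researcher who never got lucky


random.seed(42)
trials = 2000
false_hits = sum(peeking_trial() for _ in range(trials))
print(f"False-positive rate with optional stopping: {false_hits / trials:.1%}")
```

Each peek is one more free option: the researcher keeps the upside (a publishable p-value) while the downside (one more failed look) costs him nothing.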
Even experiments can be marred by bias: the researcher has an incentive to select the experiment that corresponds to what he was looking for, hiding the failed attempts. He can also formulate a hypothesis after the results of the experiment—thus fitting the hypothesis to the experiment. The bias is smaller, though, than in the previous case. The fooled-by-data effect is accelerating. There is a nasty phenomenon called “Big Data” in which researchers have brought cherry-picking to an industrial level. Modernity provides too many variables (but too little data per variable), and the spurious relationships grow much, much faster than real information, as noise is convex and information is concave.
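The convexity of noise can be sketched in a few lines (again my illustration, with hypothetical names, not the author's): hold the number of observations per variable fixed and keep adding pure-noise variables. The count of variable pairs showing a “strong” spurious correlation explodes, because candidate pairs grow roughly with the square of the number of variables while no real information is added.

```python
import random
import statistics


def pearson(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def spurious_pairs(n_vars, n_obs, threshold=0.5, rng=random):
    """Count pairs of pure-noise variables whose |correlation| exceeds threshold."""
    cols = [[rng.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_vars)]
    hits = 0
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            if abs(pearson(cols[i], cols[j])) > threshold:
                hits += 1
    return hits


random.seed(7)
# Few observations per variable (15), ever more variables: pure noise throughout.
results = {n: spurious_pairs(n, n_obs=15) for n in (10, 50, 200)}
print(results)
```

With data per variable held scarce, the “discoveries” scale with the number of pairs searched, not with anything in the world: that is cherry-picking at industrial level.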
The tragedy is that it is very hard to get funding to replicate—and reject—existing studies. And even if there were money for it, it would be hard to find takers: trying to replicate studies will not make anyone a hero.
Departments need to teach something so students get jobs, even if they are teaching snake oil—this has trapped us in a circular system in which everyone knows that the material is wrong but nobody is free enough, or courageous enough, to do anything about it.