The data on online abuse is sobering: Nearly one in three teens has been cyberbullied, and one in five women has experienced misogynistic abuse online. Overall, some 40% of all internet users have faced some form of online harassment. Why have online communities failed so dramatically to protect their users? An analysis of 18 years of data on user behavior and its moderation reveals that the failure stems from five misconceptions held by the people responsible for moderating online behavior: that people experiencing abuse will leave; that incidents of abuse are isolated and independent; that abuse is not an inherent part of community culture; that rivalries within communities are beneficial; and that self-moderation can and does prevent abuse. These misconceptions drive current moderation practices. In each case, the authors present findings that both debunk the myth and point to more effective ways of managing toxic online behavior.