Negative Posts Garner 27% Higher Scores on Hacker News
A recent empirical study of 32,000 Hacker News posts finds that negative content achieves an average of 35.6 points, about 27% higher than the overall mean of 28 points. The study validates this finding with six distinct sentiment classifiers, including transformer models and large language models, and the negativity bias persists across all of them. The findings, which differentiate substantive criticism from personal attacks, are slated for release along with code, datasets, and a live dashboard.
Recent studies of attention dynamics on Hacker News (HN) have highlighted a striking bias toward negative content. An analysis of 32,000 posts and 340,000 comments shows that posts classified as negative receive an average score of 35.6 points, compared with a global average of 28 points—yielding a 27% performance premium.
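The premium reported above follows directly from the two averages. A quick sanity check (the 28-point baseline and 35.6-point negative-post average are the figures from the study):

```python
# Sanity check on the reported engagement premium for negative posts.
negative_mean = 35.6   # average score of posts classified as negative
overall_mean = 28.0    # global average score across all posts

premium = (negative_mean - overall_mean) / overall_mean
print(f"{premium:.1%}")  # → 27.1%
```

Rounded to the nearest whole percent, this matches the 27% figure quoted throughout.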
The data underpinning this conclusion come from a longitudinal empirical investigation that explored decay curves, preferential attachment, survival probability, and early‑engagement prediction across the HN archive. The preprint detailing the methodology is available on SSRN, and the research has been replicated using a suite of six sentiment‑analysis models to guard against classifier bias.
The models employed include three transformer‑based classifiers—DistilBERT, BERT Multilingual, and RoBERTa—alongside three large language models: Llama 3.1 8B, Mistral 3.1 24B, and Gemma 3 12B. While the predicted distributions vary slightly, every method demonstrates a consistent negative skew. In practice, the DistilBERT model was selected for production deployment, balancing computational efficiency with robust performance in a Cloudflare‑backed pipeline.
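One common way to combine several classifiers into a single per-post label is a simple majority vote. The sketch below is illustrative only—the study does not describe its aggregation rule, and the tie-breaking choice here is an assumption of this example, not the authors' method:

```python
from collections import Counter

def consensus_label(labels: list[str]) -> str:
    """Return the majority sentiment label across classifiers.

    Ties are broken in favour of 'neutral' to stay conservative;
    this is a design choice for the sketch, not taken from the study.
    """
    counts = Counter(labels)
    _, top_count = counts.most_common(1)[0]
    tied = [lbl for lbl, c in counts.items() if c == top_count]
    return "neutral" if len(tied) > 1 else tied[0]

# Hypothetical per-post outputs from six classifiers:
votes = ["negative", "negative", "neutral",
         "negative", "positive", "negative"]
print(consensus_label(votes))  # → negative
```

With six voters an even split is possible, which is why the tie-breaking rule matters more here than in an odd-sized ensemble.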
The notion of “negative” sentiment in this context encompasses constructive criticism of technology, skepticism toward industry announcements, complaints about prevailing practices, and expressions of frustration with APIs. These are distinct from personal harassment; the majority of HN negativity manifests as substantive technical critique.
One key question remains: does negativity drive engagement, or do controversial topics attract both criticism and attention? The evidence suggests the effects compound: controversial topics draw attention on their own, and negative framing amplifies their visibility further.
The complete codebase, annotated dataset, and an interactive dashboard will be made publicly available in an upcoming release. Subscribers can stay abreast of updates via the project’s RSS feed or through direct notifications on Bluesky.
This research underscores the nuanced relationship between sentiment and social media metrics, offering valuable insights for community moderators, platform designers, and researchers examining online discourse dynamics.