Analyzing 10,000 Show HN Submissions

The Best "Show HN" Projects

I manually checked the status of the top 100 Show HN submissions. Eleven of them turned out to be dead; the others still deliver what they promised in the original submission:

[Chart: survival status of the top 100 Show HN projects]

The connection between age and death is not evident here, but it reveals itself in a larger sample, as we’ll see later.

Top Makers

The top 10 users ranked by total scores in Show HN submissions:

Author        N   sum(score)
-----------------------------
olalonde      7   3589
negrit        9   1472
vicapow       9   1464
geoffschmidt  1   1386
timmorgan     4   1318
gkoberger     5   1306
dang          1   1302
omegaworks    1   1215
hannahmitt    1   1172
afshinmeh    25   1080

Top makers have few projects. In the Top 100, the mode is one project. Only a few authors, like afshinmeh, have more than a dozen. Even in those cases, most of the score comes from a single submission.

The best examples of systematic success are ianstormtaylor and erohead: they have high median scores across six and three projects, respectively.

Commercial Success

To explore the commercial success of Show HN projects, I combined submissions with CrunchBase data, joining the two datasets by network location. Despite obvious problems with shared second-level domains (github.com hosts many Show HN projects but belongs to a different entity in CrunchBase), most Show HN projects have individual domains and therefore match uniquely.
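A join like this can be sketched in a few lines. This is a minimal illustration, not the author's actual pipeline; the dataset rows and field names (`url`, `funding_usd`) are made up for the example.

```python
from urllib.parse import urlparse

def netloc_of(url):
    """Extract the network location used as the join key, dropping 'www.'."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

# Hypothetical rows standing in for the two datasets.
show_hn = [{"title": "Show HN: Foo", "url": "https://www.foo.io/launch"}]
crunchbase = {"foo.io": {"funding_usd": 1_200_000}}

# Inner join on the extracted domain; shared hosts like github.com
# would need to be excluded before a join like this is trustworthy.
matched = [
    {**post, **crunchbase[netloc_of(post["url"])]}
    for post in show_hn
    if netloc_of(post["url"]) in crunchbase
]
```

Any submission whose domain also appears in CrunchBase ends up in `matched` with the funding fields attached.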

Conclusions:

  • 500 out of 10,000 top-rated Show HNs appear in the CrunchBase dataset.
  • Matched projects raised $3.5 bn in total funding, with some error due to the matching process.
  • I found no statistical connection between HN scores and funding.
  • 72% of the matched projects reside in the United States. The United Kingdom and Canada contribute 10%. The other 193 countries deliver the remaining 18%.
  • 7% of the projects are now defunct, according to CrunchBase. This is a relatively high survival rate.

These stats are impressive, but they describe the best projects. What about the rest?

The Average "Show HN" Project

Front-page appearances make up a small fraction of submissions. The median Show HN project looks modest:

[Chart: score statistics for the median Show HN submission]

Projects earned these scores over one or two days after submission. How are they doing now?

I took a random sample of 10,000 Show HN submissions and sent a request to each associated URL. The servers returned status codes, which I used to divide Show HN projects into living and dead: I interpreted code 200 as meaning the project still exists, and all other codes (301, 404, timeout, and so on) as indicating a defunct project.
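A probe like this can be sketched with the standard library. This is my own sketch of the described procedure, not the author's script; note that `http.client` is used directly because `urllib.request.urlopen` follows redirects, which would hide the 301s the article counts as "dead".

```python
import http.client
from urllib.parse import urlparse

def probe(url, timeout=10):
    """Return the raw HTTP status code for url, or None on a network failure.

    Redirects are reported as-is (301/302), not followed.
    """
    parts = urlparse(url)
    conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                else http.client.HTTPConnection)
    try:
        conn = conn_cls(parts.netloc, timeout=timeout)
        conn.request("HEAD", parts.path or "/")
        return conn.getresponse().status
    except (OSError, http.client.HTTPException):
        return None  # DNS failure, refused connection, timeout, ...

def is_alive(status):
    """The article's rule: only a plain 200 counts as a living project."""
    return status == 200
```

Everything else, including timeouts (`None` here), falls into the "dead" bucket.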

You can think of this status as indicating relevance. Relevant projects persist and fads disappear.

In this random sample, servers returned these codes:

[Chart: distribution of HTTP status codes in the random sample]

Descriptions of each code are available here

Does Hacker News Predict Success?

Users judge these projects early, and success on Hacker News may help developers decide what to do next with their project. Sometimes they succeed. For example, Stripe and Dropbox started with Show HN. So how well does the Hacker News community detect future hits?

I regressed the current status (alive or dead) on relevant variables:

[Table: regression of current status on score, comments, timing, and hosting]

Translating into English:

  • The initial reception (score) does reflect the quality of the project, as measured by its survival.
  • More comments follow weaker projects.
  • Projects posted on weekends aren’t better. In fact, the survival rate is flat across hours and days of the week when the project was submitted to HN.
  • Projects hosted by GitHub live longer. GitHub maintains old repos for free, so the death rate is naturally lower; I included this variable to control for that bias. Other free web services (youtube.com, blogspot.com) host few projects, and I ignored them.
  • The average death rate is around 9% a year. After six years, a project is more likely to be dead than alive.

These results hold across different specifications and estimation procedures (of which I report only the basic model). So the Hacker News reception has real predictive power: the basic OLS model achieves R² = 14%.
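The basic specification can be sketched as a linear probability model. The data below is synthetic and the coefficients are my own stand-ins (chosen only to match the signs the article reports, e.g. roughly 9% death rate per year), not the author's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic stand-ins for the regressors (assumptions, not the real data).
log_score = rng.normal(2.0, 1.0, n)
log_comments = 0.8 * log_score + rng.normal(0.0, 0.5, n)
age_years = rng.uniform(0.0, 8.0, n)
on_github = rng.integers(0, 2, n).astype(float)

# Latent survival: higher score and GitHub hosting help; more comments
# (given score) and age hurt -- mirroring the article's conclusions.
latent = 0.3 * log_score - 0.2 * log_comments - 0.09 * age_years + 0.4 * on_github
alive = (latent + rng.normal(0.0, 1.0, n) > 0).astype(float)

# OLS via least squares: regress alive/dead status on the variables.
X = np.column_stack([np.ones(n), log_score, log_comments, age_years, on_github])
beta, *_ = np.linalg.lstsq(X, alive, rcond=None)
resid = alive - X @ beta
r2 = 1.0 - resid.var() / alive.var()
```

`beta` holds the fitted coefficients (intercept, score, comments, age, GitHub) and `r2` the share of variance explained, analogous to the article's 14%.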

Comments

  • What about cases where error codes happen because of success, like a redirect to a new domain? They don’t affect the results. Regressing redirects on the same variables yields insignificant coefficients; the same holds for the other error codes.
  • Here and there, natural logs helped adjust scores and comments for self-reinforcement effects: when a project hits the front page, its score and comment count grow faster than the project’s quality justifies. Logs reduce this effect for better estimates.
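The effect of the log transform is easy to see on synthetic heavy-tailed scores (the distribution below is made up for illustration): the raw scores are strongly right-skewed, and taking logs pulls the skew toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)

# Heavy-tailed synthetic scores, mimicking front-page self-reinforcement.
scores = np.exp(rng.normal(2.0, 1.0, 5_000)).round()

def skewness(x):
    """Sample skewness: third standardized moment."""
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

raw_skew = skewness(scores)           # large and positive
log_skew = skewness(np.log1p(scores)) # much closer to zero
```

`log1p` (log of 1 + x) is used rather than a plain log so that zero-score rows do not blow up.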
