“A technological singularity is a fundamental change in the nature of human civilization, technology, and application of intelligence, to the point where the state of the culture is completely unpredictable to humans existing prior to the change.” —Siri
Heated debate continues, with partisans lined up on both sides and many competing claims about the singularity: what it means, when it might happen, and what the implications are.
My view is that we will reach a “crossover” point between human and computer intelligence.
But first, I’d like to step back and explain how I frame this and why I don’t think anyone can say with certainty when that point might arrive. I definitely don’t believe the “it’s right around the corner” crowd. It also seems simplistic to try to pin down a date, whether that’s Kurzweil’s 2045 prediction or some other fixed point in the future.
Step changes in innovation, and simultaneous discoveries, very often emerge independently in more than one place before hitting an inflection point at a specific time and place that carries them to the masses. So trying to specify a fixed date for the singularity seems misguided to me. In fact, the concept of the singularity itself followed this path: it first popped up in a couple of references, then took hold decades later after being popularized by the computer scientist and writer Vernor Vinge.
It’s more likely that technological change will develop on a continuum and we will all look up one day at some future point and notice that the singularity has already arrived. That said, here are my difficulties with the reasoning underlying some of the predictions:
Moore’s Law is cited as “proof” that the singularity will occur, and quickly, but…
- What reason do we have to believe that Moore’s Law doesn’t have a point at which, to put it in economic terms, it reaches “diminishing returns”? Why should we assume it is constant and immutable?
- And does Moore’s Law, observed as it has been for computer hardware, have any relevance for the multiple fields (science, biology, software) to which it would need to apply to have any predictive value for AI development and the singularity? Paul Allen’s piece, in which he references “the complexity brake,” is a good reminder that scientific advances occur in fits and starts, not at a uniform, predictable rate.
- What weight are people giving to how exogenous, unforeseen events might affect things?
- What other “law(s)” might be key?
- Is technological development the only factor we need to consider when thinking about and making predictions for the singularity?
How does this impact startups and entrepreneurship?
Planning for and moving constructively toward the singularity will require both incremental and exponential advancements in multiple fields. On the technological front, we need some big wins analogous to what occurred with the rise of semiconductors, the Internet, personal computers and smartphones.
Big breakthroughs, and the deep benches of intense multidisciplinary teams needed to effect change, will require human and financial capital on a large scale, plus the participation of entrepreneurial minds drawn from private, public, and government talent pools. This is already happening, but again, it’s impossible to draw neat extrapolations of timeframes.
David Ehrenberg is the founder and CEO of Early Growth Financial Services, an outsourced financial services firm that provides startup founders and early-stage companies with accounting, finance, tax, valuation, and corporate governance services and support. He’s a financial expert and startup mentor, whose passion is helping businesses focus on what they do best. Follow David @EarlyGrowthFS.