Recently, we've seen explosive growth in this field and are getting hints of the widespread effects these technologies can have on seemingly every human endeavor. As I've watched the changes in how AI is being implemented, I've come to the conclusion that the companies that become AI-first now -- by bringing that expertise into their executive suites and weaving the knowledge throughout their organizations -- will have strong advantages over their competitors.
A Brief Word about Terminology
The terms "AI" and "machine learning" are often used interchangeably. Here are my basic definitions:
- AI (Artificial Intelligence) is a general category of technologies and techniques with the common goal of imparting some level of human-like "thinking" (such as prediction, recognition/classification, and the ability to "learn") to computers. Today, we are still a very long way from human-like thinking and consciousness.
- Machine learning is a sub-category of AI. It refers to a machine or program that can learn from large datasets, as opposed to the alternative approach of writing human-encoded rules to solve a problem. Deep learning, a newer term, is a machine learning technique based on earlier neural network systems.
Why Now: A Perfect Storm of Algorithms, Computing Resources, and Education
Many assume that AI is a new phenomenon. In fact, AI was first proposed as a research goal in the 1950s, but it's taken until now for the right combination of factors to make truly usable AI a reality: better algorithms; faster, cheaper, and more available computing resources; and education.
The previous surge in AI research occurred in the 1980s. I was fortunate enough to enter the field at this time, studying speech recognition at Cambridge University, followed by work on neural networks at Stanford University, and by working on the first portable computers with commercial speech recognition at Apricot Computers.
Prior to the 1980s, AI technologies consisted largely of expert-system approaches, in which humans encoded rules in programs to solve problems. In contrast, the 1980s saw the advent of practical algorithms that learned from data. There were a variety of statistical approaches, and we also saw the first wave of neural computational models applied to real commercial problems. Inspired by brain architectures, these models consisted of computational "neurons" arranged in layers. The neurons performed simple computations and sent their weighted results to other neurons; a learning technique called the backpropagation algorithm showed how to practically train the weights to solve real problems. Architectures at that time consisted of just a few thousand parameters and neurons, arranged in only a few layers.
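To make this concrete, here is a minimal sketch (my own illustration, not code from the era) of such a layered network trained with backpropagation -- a few dozen weights learning the classic XOR problem, which a single layer of neurons cannot solve:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR inputs and targets: output is 1 only when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 "neurons" -- a few dozen parameters, true to the era.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 1.0
for _ in range(5000):
    # Forward pass: each neuron computes a weighted sum, then a nonlinearity.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back through the weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print((out > 0.5).astype(int).ravel())  # thresholded predictions after training
```

The point is not the particular task but the mechanism: nothing in the code encodes XOR as a rule; the weights discover it from the data.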
Over the decades, AI slowly entered mainstream use -- telephone banking, airline booking, loan scoring, robotics -- with early versions often heavily criticized but getting better and better over time. The latest implementations have billions of parameters and hundreds of neuron layers (the "deep" in deep learning), and we can now train on far more available data. However, notwithstanding the great press given to deep learning, many practical commercial systems use a variety of approaches, often in hybrid systems.
A big advantage of the latest techniques is that they de-skill the domain specialist, transferring that power to the AI deployment itself. There are many examples where, after a few months of collecting the data and "just" submitting it to deep learning algorithms, we can outperform a decade of hand engineering.
The evolving AI algorithms couldn't have been created, or been adequately tested and implemented, without the accelerated improvements in computing resources over the past three decades.
Processors, memory, and storage are now cheaper and more available than they've ever been. Compute power is tens of thousands of times faster than it was in 1984. If we can't put enough compute power locally in the device, we have access to practically unlimited storage and compute in the cloud. New architectures optimized for these algorithmic approaches have also arrived: parallel computing architectures -- one example at Manchester University connects 500,000 ARM chips -- and the now-popular technique of repurposing graphics processing units (GPUs) for general computation.
It used to be the case that one had to undertake a Ph.D. program in the field and practice writing the code from scratch. Now it seems like every computer science, electrical engineering, statistics, and even biology department offers AI courses at the undergraduate level. These days there are numerous open source toolkits and hardware platforms to choose from; examples include TensorFlow from Google and Keras.
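As an illustration of how accessible these toolkits have become, the following sketch (my own, using the Keras API bundled with TensorFlow) defines and trains a small classifier in a handful of lines -- the same kind of network that once required hand-written training code:

```python
import numpy as np
import tensorflow as tf

# A toy XOR task, expressed in a few lines of Keras.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y = np.array([[0], [1], [1], [0]], dtype=np.float32)

# Declare the layers; the toolkit handles backpropagation automatically.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=500, verbose=0)

preds = (model.predict(X, verbose=0) > 0.5).astype(int)
```

The layer sizes and epoch count here are arbitrary illustrative choices; the point is that gradient computation, optimization, and hardware acceleration all come for free from the toolkit.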
An AI-First Response
My thesis is that companies that deploy AI early will leapfrog their competition by taking an "AI-first" approach. (This reminds me of the set of companies that made a strategic decision 10 years ago to be "mobile first" and are now reaping the rewards.) I think there are still very few truly integrated AI-first companies that use AI both in their products and in how they run their operations.
Of course, the VC community is rapidly funding AI startups to supply consumers and organizations. I am personally involved with several companies using AI. Here are a few of them; my list illustrates the wide variety of AI-applicable industries and use cases:
It's Time for the Chief AI Officer
As different disciplines learn more about AI technologies, that knowledge will spread organically throughout a company. However, making an enterprise truly AI-centric is a directive that needs to come from top management. I've seen that expertise located in and disseminated from the offices of CEOs, CIOs, CTOs, and even the occasional Chief Algorithms Officer. None of these roles exactly captures what is needed to make an organization truly "AI first".
What is needed that's different? A Chief AI Officer needs to provide cross-functional experience and input on how to insert AI into a company's products and how it can affect all aspects of the organization. She also needs to sort the wheat from the chaff, separating reality from mere hope -- these days every startup's pitch deck has the term "deep learning," with many claiming the holy grail of true human intelligence in software. She needs to filter these claims and be able to determine:
- Where can AI be used to make the organization more efficient?
- What products, new and old, can AI be incorporated into?
- What are the latest AI techniques, how do they work, and what are their limitations?
- Once products, processes, and techniques have been identified, how can they be introduced into the organization?
- What tools and hardware investments need to be made?
- What training needs to be provided to the ENTIRE organization, including how to address worries that these machines will replace jobs?
- How should the ROI of all implementations be measured, and how can both positive and negative learnings be iterated into future deployments?
- How should AI ethics be shepherded and communicated throughout the organization?
In addition to the leader-manager tasks listed above, this person or department should be multidisciplinary, combining the skillsets of entrepreneurship, product development, mathematics, engineering, computer science, and ethical analysis.
To become AI-first -- with the enhanced stock price that may accompany it -- companies need to get serious and assign responsibility: if the CEO doesn't take it on personally, then by appointing a Chief AI Officer!
Otherwise, don't be surprised if your competitors leave you in the rear-view mirrors of their autonomous vehicles.