While the press has been particularly abuzz with AI in the past couple of years, the concept of artificial intelligence has been around for more than half a century. More recently, the "AI" label has been slapped on a variety of technologies, including many that merely perform routine tasks. The increase in their speed and accuracy has created the impression that machines are getting more intelligent, with some vendors even boldly conjuring images of an almost all-knowing humanoid robot.
But here's the catch: most machines aren't actually getting smarter. Intelligence involves a machine's ability to take actions in pursuit of a goal, an ability that most "artificial intelligence" algorithms still don't have. While models have certainly become more capable, largely because increased computing power allows brute-force analysis of staggering volumes of data, they remain susceptible to the same pitfalls as algorithms of the past. The accuracy of these AI techniques is still largely tied to the quality of the data and the rules humans define, leaving them as error-prone as ever, while growing complexity and abstraction make those errors harder to detect.
Calling an algorithm "AI" does not make it smarter, just as putting a white lab coat on a random person does not make them a doctor or a scientist. When a supposedly intelligent machine is given bad rules or inadequate data, it can and will produce flawed results. In such cases, it's not artificial intelligence you're dealing with; it's artificial stupidity.
While technology vendors should be more transparent about the tangible benefits of their products, business leaders should also exercise healthy skepticism about the extent of AI's current and near-term capabilities before embarking on a mega-dollar, multi-year quest.