Artificial Intelligence: Past, Present & Future

The Past

While the idea of artificial intelligence (AI) dates back to the stories and myths of ancient history, its modern popularity began in the late 1800s and early 1900s, as modern science fiction gained popularity. One of the earliest examples, Erewhon (1872) by Samuel Butler, speculates on intelligent machines and their place in society. In the book, machines are banned in the fictional land of Erewhon because they are viewed as a threat to humans. In the years following Erewhon, science fiction stories became more elaborate, giving us thinking robots & machines, each a little more fantastic than the last.

In 1956, John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, and Claude Shannon, head of the Information Theory group at Bell Labs, organized a 6-week brainstorming session with seven of their peers to discuss the evolution of computers and neural networks. The goal was to work out how to create a computer that could think, or engage in common-sense human problem solving. Over the subsequent years, McCarthy would become known as the father of artificial intelligence, and millions of dollars were pumped into this research. However, computer hardware was not advanced enough, and funding dried up in 1973.

After a brief resurgence of interest in AI in the early 1980s, research went dormant again in the 1990s, once more due to the lack of capable computer hardware. Researchers had to rethink both how to get to artificial intelligence and what it actually was.

The Present

Fast forward to today. Advances in computing hardware are finally starting to approach the capabilities necessary to make artificial intelligence a reality. While we do not yet have sentient computers, in the sense of machines capable of supporting consciousness, we do have AI in the form of automated processing and decision-making. We have programs capable of analyzing huge amounts of data and making decisions based on that analysis. Computers are naturally suited to jobs that involve answering questions or making decisions based on quantitative analysis.

Language & communication are perhaps the biggest obstacles to the kind of AI we are trying to create. If a truly thinking AI is created, we want it to be able to communicate with us, since its purpose is to assist us. There have been advances in this field. Look at the chatbots that many web sites now use to help users answer questions. They are still fairly simplistic at this stage of their evolution, but it may only be a matter of time before they're able to pass the Turing test. The Turing test, developed by the mathematician Alan Turing in the 1950s, asks whether a machine can carry on a conversation without you being able to tell whether you are talking to a human or a machine.
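To make "fairly simplistic" concrete, here is a rough sketch of the kind of keyword matching a basic website chatbot might rely on. The rules and wording are invented for illustration, not taken from any real product:

```python
import re

# Illustrative FAQ rules: if every keyword appears in the message, reply with the canned answer.
FAQ_RULES = [
    ({"hours", "open"}, "Our support team is available Monday through Friday, 8am to 5pm."),
    ({"password", "reset"}, "You can reset your password from the login page."),
    ({"claim", "status"}, "To check the status of a claim, please provide your claim number."),
]

def reply(user_message: str) -> str:
    """Return a canned answer when all keywords for a rule appear in the message."""
    words = set(re.findall(r"[a-z]+", user_message.lower()))
    for keywords, answer in FAQ_RULES:
        if keywords <= words:
            return answer
    return "I'm not sure I understand. Let me connect you with a person."

print(reply("How do I reset my password?"))
```

A bot like this only matches keywords; it has no understanding of what it is saying, which is exactly why passing the Turing test remains a long way off.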

One place we can already see some forms of AI is in utilization review. Here we see vast databases of workflow rules, built with the help of millions of historical data points. Although there is no substitute for the judgment of a utilization review nurse or reviewing physician, certain aspects of treatment requests can now be analyzed, decisions made, and notifications sent out, all with minimal human intervention. The system can even decide when a human needs to be involved, and who to involve, in unusual cases. This is more than just automation. The level of analysis required gives it the look and feel of artificial intelligence.
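A simplified sketch of what that rule-driven triage might look like is below. The fields, procedure codes, thresholds, and routing categories are hypothetical examples, not actual ReviewStat rules or clinical criteria:

```python
from dataclasses import dataclass

@dataclass
class TreatmentRequest:
    procedure_code: str
    estimated_cost: float
    within_guidelines: bool   # whether the request matches an evidence-based guideline

ROUTINE_PROCEDURES = {"97110", "97140"}   # illustrative routine procedure codes
COST_THRESHOLD = 1_000.00                 # illustrative dollar limit

def triage(request: TreatmentRequest) -> str:
    """Decide whether a request can be handled automatically or needs a human reviewer."""
    if (request.procedure_code in ROUTINE_PROCEDURES
            and request.within_guidelines
            and request.estimated_cost <= COST_THRESHOLD):
        return "auto-certify and notify the provider"
    if not request.within_guidelines:
        return "route to a reviewing physician"
    return "route to a utilization review nurse"

print(triage(TreatmentRequest("97110", 450.00, True)))
print(triage(TreatmentRequest("27447", 18_000.00, False)))
```

Real systems layer thousands of such rules, learned from historical decisions, on top of one another, which is what gives the process its intelligent feel.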

The Future

The biggest player in AI these days is IBM's Watson. After a brief publicity tour, IBM is training this AI to understand medical charts (especially in the area of oncology), recommend treatment guidelines, and reduce research time for doctors. IBM is also opening up public access so that third parties can interface with Watson and create their own intelligent assistants. This is not a new model, but it could be a game changer for many companies.

Expanding functionality using internet web services is not a new concept, but the newer design mantra of "microservice architecture" is the fruition of the kind of networked communication design that will eventually bring us true artificial intelligence. The basic idea is that software is not one big monolithic program written by a single dedicated team. Instead, you build the piece of software you do best and make it available for other people to use. Alternatively, you can create software that brings together many different pieces to form something new, which in turn can become a building block for something else.
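As a minimal sketch of that idea, here is a tiny service that does one thing well and exposes it over the network, plus a helper any other program could use to build on top of it. The service name, endpoint, and guideline text are invented for illustration:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class GuidelineLookup(BaseHTTPRequestHandler):
    """A tiny service that does one thing: look up a treatment guideline by code."""

    GUIDELINES = {"97110": "Up to 12 visits over 6 weeks for uncomplicated cases."}  # illustrative

    def do_GET(self):
        code = self.path.strip("/")
        body = json.dumps({"code": code,
                           "guideline": self.GUIDELINES.get(code, "No guideline found.")})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

def fetch_guideline(code: str) -> dict:
    """Any other program, such as a chatbot or review workflow, can reuse the capability with one HTTP call."""
    with urlopen(f"http://localhost:8080/{code}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GuidelineLookup).serve_forever()
```

Each small piece stays simple on its own; the intelligence emerges from how many of these pieces are wired together.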

In utilization review, and insurance in general, artificial intelligence can help reduce costs, streamline processes, decrease errors, and speed up decisions about medical treatments. Another behind-the-scenes benefit is recognizing fraud early and preventing its drain on resources. As our industry begins to transform and embrace these technologies, I start to think, "Where does it end?" Are we destined to have all our medical care controlled by machines? The bottom line is that, even if it sometimes feels strange, embracing these advanced technologies can and does deliver tremendous improvements to workflows and processes, with real-world benefits for injured workers and other participants in the workers' comp system.

Todd Davis

Todd Davis, Vice President of IT – ReviewStat Services for UniMed Direct, is responsible for the continued development of the industry-leading ReviewStat system. Leading a team of like-minded professionals, Todd works to review and improve ReviewStat's full-featured and robust system to make it even more efficient and easy to use.