
AI – Seen through an economics lens

Professor Avi Goldfarb of the University of Toronto talks about AI through an economics prism, touching upon a number of areas such as tax, value, inflection points, and utility.

DQINDIA Online


He is the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management, University of Toronto. Professor Avi Goldfarb is also the Chief Data Scientist at the Creative Destruction Lab, a faculty affiliate at the Vector Institute and the Schwartz Reisman Institute for Technology and Society, and a Research Associate at the National Bureau of Economic Research. His bold and deep research unravels the economic effects of information technology. He has co-authored thought-provoking books such as Prediction Machines: The Simple Economics of Artificial Intelligence and Power and Prediction. In this interview he helps us understand the delicate threads that connect, and sometimes entangle, two already-complex fabrics: economics and AI.

The age of ‘thinking AI’ is still 20 to 50 years away. Ethics, bias and transparency are real issues. Ultimately, the AI we use depends on the data collected to train it. Ask yourself what the mission of your business is, and how prediction technology will allow you to do it better.

Tell us something about the inspiration behind, and the journey of, your book ‘Power and Prediction’.


When I was a graduate student, there was this new thing called the Internet which had just begun to fascinate the world. The first 15 years went into understanding that concept. When we started the Creative Destruction Lab (which helped science-based start-ups to scale), we saw a company using AI for drug discovery – and this was in the first year itself. Remember, that idea was a staggering one ten years back, even if it seems pretty familiar now. In the years that followed, there was a flood of AI-based companies. So in 2015, my co-authors and I decided to wrap our heads around AI, especially around what it meant for business and the economy. That led us to the first book. It is about how economists would think of these technologies. It is not simply about AI getting cheaper; the question is about prediction technology – the components of prediction and how human judgment fits into that equation. The book was a success, and we thought AI would take off exponentially. 2018 happened. 2019 happened. But even in 2020 there were no signs of the massive AI explosion everyone had expected. That puzzled us – and, thus, came the next book.

So what’s happening?

A lot of companies had not derived value from their AI investments. Prediction technology is a big deal on paper, but the reality on the ground is quite different. That led us to question whether we were wrong about the technology being a big deal, or wrong about the timelines. We dug into the economic literature and discovered that big technology changes tend to take time. The metaphor for AI can be electricity. It was a big deal when it arrived in the US in the late 19th century, but it took time before it became a mainstream force.


Why did it take that long? And how does that help us understand AI’s adoption curve?

In order to use electricity, a lot of extra expense had to be incurred, and to make that expense worthwhile, electricity had to make a big impact on the bottom line. Just swapping out the steam engine did not help much (beyond meager savings) unless the underlying workflow was also reconfigured. It is similar with AI: to get the data in order, computer systems and processes have to fall into place – and all that expense and hassle is not justified by meager savings.

You mentioned that the economics of everything is important. So if the price of ‘coffee’ goes up, that of milk, cream, or sugar also climbs. If AI prediction is ‘coffee’, then what is ‘tea’ in this equation?


The part of the workflow that is done by humans – judgment – is the ‘tea’. And it differs from scenario to scenario: it would play out very differently in a transcription process than in a medical diagnosis process, for example. That said, if AI replaces some of these parts, it frees humans for more valuable and more effective work. We have to think beyond point solutions and incremental impact here.

What’s your take on AI tax?

The world has had a productivity slowdown, and productivity is everything in the long run. Many experts have pointed out that the problem is one of ‘not enough AI’. On average, society wins with better technology, but that is not the impact on every individual. So to create that balance, we can think of a social safety net or something similar. Better prediction can give value to your potential customers and change the way you operate a business. That is possible with a transformative, systems-level approach to AI.


Are CIOs and CEOs standing on an AI cliff? What’s your advice to them about AI investments?

The age of ‘thinking AI’ is still 20 to 50 years away. Ethics, bias and transparency are real issues. Ultimately, the AI we use depends on the data collected to train it. Ask yourself what the mission of your business is, and how prediction technology will allow you to do it better. Rather than deploying point solutions, think about both the beneficiaries and the resistance points of AI. Who gains power from AI? Who is scared? Who gets disoriented? Who runs it? Those are the questions to ask.

Avi Goldfarb


Professor, University of Toronto

By Pratima H

pratimah@cybermedia.co.in
