by Evan Shellshear, Brendan Markey-Towler and Leonard Coote
An abridged copy of this article was published on The Conversation. Follow the link to read the shorter article ‘A simple calculation can stop artificial intelligence sending you broke’. We present the full article here.
Mike’s story: AI in the outback
Few places on earth can break people like the Australian outback. Toughness is not enough to survive out there, let alone thrive. You must be smart, resourceful, and innovative to stand a chance.
In this part of the world, 2019 saw the culmination of a devastating nine-year El Niño. Things die during a regular El Niño. During this one, it was common to see livestock strewn along the road where they had perished for want of food and water. Then in the second half of the year, one of the worst bushfire seasons Australia has ever seen began and it made a difficult situation impossible.
Mike is a crop farmer, one of those tough-as-nails, wily, and innovative Australians who make their living in this environment. Reeling from this latest catastrophe and to keep ahead of the next disaster, he began exploring smart farming techniques enabled by artificial intelligence (AI). The technology he was pitched overlaid AI onto Big Data, and would have been integrated into his irrigation, pest control, and soil management systems to allow “precision farming.”
In theory, by refining the focus of his practices from the paddock level to the level of the individual plant, the technology could reduce wastage by up to 80 percent. That is a lot of extra (desperately needed) cash when you are trying to compete in a global market with little assistance from your government, against foreign farmers with lots of assistance from theirs. Add to that an environment where little will grow.
The salesperson’s pitch to Mike was compelling and showed a familiarity with the existential challenges faced by Australian farmers – supported by a clever application of AI generating potentially significant returns. But there was a problem: the business case the salesperson outlined raised doubt in Mike’s mind. The initial investment would be $500,000 with additional costs of $80,000 per annum for data storage and processing alone before maintenance and repair.
To put this in context, in 2020-21, average Australian farm cash income, out of which farmers pay their families' living costs, was about US$137,500. Profit was $79,000, which translates to a 1.6 percent rate of return on farm assets (and the 2020-21 financial year was a good year for Australian farmers). Within that, average expenditure on crop and pasture chemicals (herbicides, etc.) and fertiliser was $63,000. Including water rights takes that expenditure to $100,000.
Assuming that the technology performed at its absolute maximum (an 80 percent cost reduction) and added no other costs, then for the average Australian farm this technology would, at the very best, be profit neutral going forward, and would put the farm significantly into deficit in the first year. In short: if Mike's farm were roughly average, adopting the AI might have bankrupted his family.
The cart before the horse: technology before business
We live in a new era of obsession with AI. The technology is at once enchanting and increasingly pervasive, and everyone has something to say about it, from Elon Musk to your Uber driver. Our opinion is that almost nobody is looking at the main problem it presents for businesses. Almost everyone is caught up in the engineering of what AI can do, when many more ought to be caught up in the economics of what AI is worth.
To illuminate the problem, let us go back to the foundational question: what is AI? Fundamentally, artificial intelligence is the pursuit of the motivating dream that lay at the dawn of computer science in the 1950s. John von Neumann and Alan Turing, two of the “fathers” of the discipline, both explicitly imagined building machines that could mimic the operations of an intelligent mind. Computers would become an AI insofar as the programs which “read”, operated on, and “wrote” data, represented in mechanical states and dynamics, could mimic the perception and processing of an intelligent mind.
Thus, for the next half-century and beyond, AI developed as a subdiscipline of computer science mostly dedicated to the original motivating dream of the discipline. Over time, further additions have extended this core: the growing integration between AI and robotics, the potential for linking advanced analytical systems using the internet (e.g., the Internet of Things), the use of AI to operationalise Big Data, the application of artificial intelligence to save lives, and so on. However, the underlying drive and advance of the technology has been in the same direction: the development of algorithms (programs) that can mimic the operations of an intelligent mind, or better still, a superintelligent mind. Hence, the hype surrounding AI.
Humans may yet follow in the footsteps of gods and create a new form of intelligence to rival our own. At the least, we now have at our disposal technologies that offer the possibility of automating a broad range of human activities. We are not just talking about the automation of manual labour on the production line, for many of the major advances in AI, particularly in the past decade, have been in automating data processing and analysis. One of the most thrilling (and terrifying) advances has been the advent of the GPT-3 algorithm from OpenAI, the closest we have come yet to general AI, which can write entire, cogent essays from a single question on virtually any topic.
The world has lived through eras of obsession with AI before, two of them ending in so-called “AI winters” – per Michael Wooldridge’s history of the field, The Road to Conscious Machines. The first major winter began in the mid-1970s when the limitations of symbolic logic systems (basically, AI systems whose syntax was like the symbolic logic of mathematical proofs) were revealed, and funding for academic research quickly dried up. Essentially, it took far too much work to produce AI systems that could perform only very narrow tasks, and the effort grew exponentially the more you wanted to broaden them. The second major winter began in the late 1980s, when these troubles in academia caught up with industry, and corporations stopped investing in expensive AI decision aids based on expert systems, switching to cheap personal computers.
In both winters, these advanced systems fell short of the hype about what they could do, and what they could do was of limited economic value. It was not that AI was not useful; far from it. It was that the intuitive concept of AI, and the wild flights of imagination it encouraged, led to expectations front-running the realised value propositions of the technology. The backlash over-corrected and set the scientific and economic advance of the technology significantly backward.
The current hype associated with AI is familiar to those with knowledge of the aforementioned history of the technology, and a warning sign that business, government, and public expectations about the technology may be running ahead of the reality. Scientific advances in neural networks and machine learning, vast improvements in computing power, and the advent of distributed computing have brought about a qualitative change in the capabilities of the technology. But modern AI is still not limitless, and it is still expensive, especially when machines must be trained to learn specific and novel task sets. If (when) disappointment with the attained value of modern AI is realised, we invite yet another AI winter.
The problem with AI is that the focus is presently on what it can do, not on what it is actually worth. At present the engineers are in the driver’s seat, and engineers’ focus on possibility does not always deliver business outcomes. Economics recognises that many things are possible, but also that resources are constrained and must be directed to the best possibilities. Paraphrasing the famous words of Lionel Robbins: economics is the study of life as a relationship between ends and scarce resources which have alternative uses. Economics needs to be put in the driver’s seat when it comes to the question of AI adoption for any organisation, and engineering put under the hood. The question must not be the engineering of what an AI can do but the economics of what an AI is worth.
The solution: put economics back in the driver’s seat
The problem with AI is that the technology is presently steering the conversation; when it comes to AI adoption in organisations, we need to lead with economics, not engineering. Let us unpack why, so that we can better pose a solution to the challenge of restoring economics as the primary decision-making framework for AI adoption.
The first thing is to firmly establish what, exactly, and as simply as possible, the economic criterion for AI adoption is. This is straightforward enough: an AI system ought to be adopted by an organisation if and only if the profit the organisation can obtain after adoption is greater than the opportunity cost of that profit. Now, the single easiest error to make in economic reasoning is to forget opportunity cost, focus only on whether profit after adoption is positive, and commit to a suboptimal, possibly even bankrupting decision. That is why it is so important to always remember opportunity cost: the value of the next best alternative. Typically, this is the profit achieved under the status quo, but it can also be the profit achieved by an alternative strategy (e.g., expanding payroll).
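The criterion can be made concrete in a few lines. A minimal sketch with hypothetical figures (the strategy names and dollar values are illustrative, not from the text):

```python
# Expected annual profit ($) of each available strategy; figures are hypothetical.
strategies = {
    "status quo": 79_000,
    "adopt AI": 79_000,
    "expand payroll": 85_000,
}

# The opportunity cost of adopting the AI is the profit of the
# next best alternative, not zero.
opportunity_cost = max(v for k, v in strategies.items() if k != "adopt AI")

# Adopt if and only if profit after adoption exceeds the opportunity cost.
adopt = strategies["adopt AI"] > opportunity_cost
print(adopt)  # False: expanding payroll is worth more
```

Note that the AI here is profitable in absolute terms, yet still ought to be rejected, which is exactly the error the opportunity-cost test guards against.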
Consider the logic of Figure 1. Economists will recognise this formula is a particular case of a consumer (buyer) surplus maximising decision, where a rational consumer looks at their set of decisions and chooses the one which maximises their consumer surplus. The surplus concept is useful as it also helps explain why the situation faced by Mike arises. When pricing their AI products and services, producers analyse their potential customer’s gains and then choose a price point to maximise their producer surplus.
Figure 1: The basic economic principle of AI adoption – of which the right-hand side is extremely easy to forget.
Where this thinking can go wrong, as with Mike, is that producers fail to factor in the total cost of ownership: all the other costs that arise with such a system, such as maintenance, repair, installation, failures, lower-than-expected results, and so on. Mike needs to consider all these factors, as well as the chance that the system could fail altogether, if he is to make an optimal decision. The above formula is valuable as a “cue” for the economic mindset, a habitual thought to always call to mind when considering AI adoption. Alone, however, it is not enough guidance for decision makers; it is too abstract. We need to be more specific.
The basic economic principle of AI adoption can be restated in what economists famously call “marginal” terms. What will be the change in profit obtained by AI adoption? Adapting this, we can unpack the basic economic principle of AI adoption. It now becomes something more specific: an AI should be adopted if and only if, relative to the next best alternative, the marginal benefit of its adoption is greater than its marginal cost. The next best alternative (the opportunity cost) will, again, typically be the status quo, but it could also be hiring another employee or outsourcing some tasks.
To make this as useful as possible, let us be still more specific. AI adoption can generate gains by improving the quality of our judgement (see Footnote 6). This improvement of our judgement leads to better quality decisions (typically by better quality predictions and prescriptions). From the firm’s perspective this means:
- We have better allocation/utilisation of the firm’s inputs
- We have better quality/delivery of the firm’s outputs
In the first case this leads to lower costs; in the second, to greater revenue. Where the outputs are services rather than sales, better quality or delivery can instead mean better outcomes: in a hospital, for example, more lives saved, and hence an increase in the value of statistical lives saved. These outcomes may also be generated directly or indirectly. In the indirect case, the integration of AI may generate greater returns on existing assets by creating synergies that boost their productivity. In Mike’s case, for example, the AI offered a cost saving: by integrating it into his irrigation, pest control, and soil management systems, he would have reduced wastage and enhanced the productivity of his existing assets, potentially reducing his required capital expenditure in the future.
On the other hand, the marginal cost of AI adoption consists of at least three main components:
- The up-front cost of installation and setup
- The ongoing operating cost of the AI system, and
- The ongoing cost of maintenance and repair of the AI system.
When these components are converted into expected net present value, we have the basis for an economically informed decision, as Figure 2 illustrates. To make the framework more applicable, the diagram replaces the general inputs-and-outputs perspective above with specific value drivers.
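The net-present-value conversion can be sketched in a few lines. The discount rate, planning horizon, and the split of ongoing costs between operation and maintenance below are illustrative assumptions, not figures from the text; the up-front and annual totals echo Mike's case:

```python
def npv(cashflows, rate):
    """Net present value of a list of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.07    # illustrative discount rate
years = 10     # illustrative planning horizon

annual_benefit = 80_000  # best-case yearly gain from the AI
upfront = 500_000        # installation and setup (year 0)
operating = 60_000       # illustrative yearly operating cost
maintenance = 20_000     # illustrative yearly maintenance and repair

benefits = [0] + [annual_benefit] * years
costs = [upfront] + [operating + maintenance] * years

marginal_npv = npv(benefits, rate) - npv(costs, rate)
print(f"Expected NPV of adoption: ${marginal_npv:,.0f}")  # negative: do not adopt
```

Because the yearly benefit exactly matches the yearly costs here, the NPV reduces to the up-front cost alone, and the decision is clearly negative.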
These are the economic principles of AI adoption, and they provide simple available cues for building and triggering a habitual economic mindset when thinking about AI adoption in organisations. To complete the system and expand these cues for a habitual mindset into the basis for a habit of behaviour, let us set down a simple decision tree for AI adoption (see Figure 3).
The decision tree consists of three simple questions: two are the responsibility of an AI salesperson to answer, and one can be posed internally within the organisation. The first question is for the salesperson: what is the dollar or percentage gain that your AI generates? If the salesperson cannot answer in terms of revenue, cost reduction, or value of a statistical life, a conservative rule of thumb is to not adopt the AI. If the salesperson gives a sufficient answer, the second question may be posed to the salesperson: what are the dollar costs of installation/setup, operation, maintenance, and repair? Again, if the salesperson cannot answer, a conservative rule of thumb is to not adopt the AI. If the salesperson gives a sufficient answer, however, we proceed to the third question: given the next best alternative to this AI, are the marginal benefits of adoption (in expected net present value terms) greater than the marginal costs? If no, the alternative ought to be pursued; if yes, the AI ought to be adopted.
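The three questions can be written down as a short decision procedure. A sketch (the function name and argument shapes are our own, not from the text, and the dollar figures in the example call are illustrative):

```python
def ai_adoption_decision(stated_gain, stated_costs,
                         marginal_benefit_npv=0, marginal_cost_npv=0):
    """Walk the three-question decision tree for AI adoption.

    stated_gain: the vendor's answer to "what dollar/percentage gain does
        your AI generate?" (None if they cannot answer).
    stated_costs: the vendor's answer to "what are the setup, operating,
        maintenance and repair costs?" (None if they cannot answer).
    marginal_benefit_npv / marginal_cost_npv: expected net present values
        relative to the next best alternative.
    """
    if stated_gain is None:
        return "reject"  # question 1: vendor cannot quantify the gain
    if stated_costs is None:
        return "reject"  # question 2: vendor cannot quantify the costs
    if marginal_benefit_npv > marginal_cost_npv:
        return "adopt"   # question 3: benefits exceed costs
    return "pursue next best alternative"

# Example: the vendor answers both questions, but the benefit falls short.
print(ai_adoption_decision(80_000, (500_000, 80_000),
                           marginal_benefit_npv=80_000,
                           marginal_cost_npv=580_000))
```

As the text notes, three of the four endpoints of this tree are rejections, which is the bar a candidate AI must clear.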
This might seem like common sense and relatively straightforward economic thinking but, as the saying goes, the funny thing about common sense is that it isn’t that common. Notice how, when we put the economic process for making decisions about AI adoption into a decision tree, any given AI needs to meet a high bar to be adopted: three of the four endpoints conclude in rejection. Keeping these three elements of an economic attitude in mind, practising them regularly, and habituating them are important for getting economics back into the driver’s seat when it comes to organisational AI adoption, and putting engineering under the hood. The problem is one of hype and expectations getting ahead of the reality of AI’s value proposition. The solution is to build good habits with simple heuristics that put us in the economic mindset whenever we assess an AI.
Resolving Mike’s AI investment challenge, and others
Applying this simple heuristic to Mike’s situation, we can readily understand why he couldn’t make business sense of the salesperson’s pitch of AI-enabled precision farming.
As we discussed in the introduction, the salesperson suggested that Mike would achieve cost savings of 80 percent; the salesperson got past the first decision point. We saw that if Mike’s farm was roughly average, the relevant expenditures would have sat somewhere around $100,000. The dollar value of Mike’s savings would have been around $80,000 in the best-case scenario. The salesperson was also upfront about the dollar cost of the systems: $500,000 for installation and $80,000 per annum in ongoing costs; thus, the salesperson got past the second decision point.
However, we can immediately see why the salesperson failed on the third decision point: the best-case marginal benefit of adopting the AI ($80,000) was less than the marginal cost ($80,000 plus the installation cost). Mike would have made less profit than his opportunity cost (e.g. doing nothing) if he adopted the AI and would have eroded the meagre 1.6 percent rate of return on his assets he was accruing. He may have even bankrupted his family by incurring a significant debt to purchase a profit-neutral technology. The technology (engineering) may have been amazing, but the economics was not.
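Mike's numbers can be checked in a few lines, using the round figures quoted above (a sketch; the variable names are ours):

```python
# Round figures from Mike's case.
relevant_expenditure = 100_000  # chemicals, fertiliser and water rights ($/yr)
best_case_saving = 0.80 * relevant_expenditure  # vendor's best-case reduction
ongoing_cost = 80_000           # data storage and processing ($/yr)
installation = 500_000          # up-front cost ($)

net_annual_change = best_case_saving - ongoing_cost  # $0: profit neutral at best
first_year_position = net_annual_change - installation  # -$500,000 in year one

print(f"Best-case annual saving: ${best_case_saving:,.0f}")
print(f"Net annual change in profit: ${net_annual_change:,.0f}")
```

Even at the vendor's absolute best case, the annual saving only covers the annual running cost, leaving nothing to recoup the half-million-dollar installation.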
Mike’s example is based on a real-life situation experienced by the authors, and is but one of many real examples that we encounter in practice every day. Because it is based on economic reasoning, our heuristic applies equally to these other cases. Let us look at an example from medicine.
In May 2019, the Food and Drug Administration made headlines around the world by approving the most expensive drug treatment to date, Zolgensma. This medicine treats spinal muscular atrophy in infants, replacing lifelong annual treatments with a one-off cure. The minimum price is (only!) US$2 million for a single treatment. To many this price point makes no sense; however, when we apply our economic heuristic, we can better understand why Novartis chose this fee.
Zolgensma is part of a new wave of drugs that promise to usher in a revolutionary era of personalised medicine. This form of medicine uses AI to leverage Big Data and discover treatments bespoke to individual genetic profiles. Zolgensma works by replacing the defective SMN1 gene that expresses itself in infant spinal muscular atrophy with a normal copy. To discover this technology for bespoke genetic medicine, Novartis had to mine terabytes of genomic data to find the right compound which, when delivered, would introduce a highly specific change to a highly specific point in highly specific individual genomes.
Does this AI-enabled technology make economic sense? Let us apply our heuristic. In this case, the direct benefit of the technology is to save (quality-adjusted) statistical lives by improving the quality of life for infants debilitated by spinal muscular atrophy. The value of a statistical life used by governments and corporations across the world in daily policymaking is typically between US$4 million and US$10 million. Novartis’ AI-enabled drug costs around US$2 million. Given we are talking about infants with an expected life of up to 80 years, there is a wide range of statistical lives that could be saved by the drug that would justify adoption. The marginal benefit (statistical lives saved) is greater than the marginal cost of adoption, and the profit is greater than the opportunity cost of doing nothing or of adopting other drugs. This calculus may sound hard-hearted, until we remember that the opportunity cost may very well be the value of allocating funds to, say, infant oncology research.
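The comparison can be sketched with the round figures in the text; treating one treatment as saving one quality-adjusted statistical life is our simplifying assumption:

```python
# Round figures from the text (USD).
vsl_range = (4_000_000, 10_000_000)  # typical value of a statistical life
treatment_cost = 2_000_000           # approximate price of one Zolgensma treatment

# Under the simplifying assumption that one treatment saves one
# quality-adjusted statistical life, check the benefit against the cost
# at both ends of the VSL range.
passes_everywhere = all(vsl > treatment_cost for vsl in vsl_range)
print(passes_everywhere)  # True: the benefit exceeds the cost across the range
```

Even at the conservative end of the range, the marginal benefit is double the marginal cost, which is why the seemingly outrageous price can still clear the third decision point.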
Putting economics back in the driver’s seat and engineering under the hood allows us to resolve investment decisions as specific as whether a given farmer should adopt AI, and as general as deciding which biomedical priorities should receive scarce research funds. Using an economic heuristic of worth, rather than an engineering heuristic of possibility, guides us to better decisions that not only mitigate the chance of a new AI winter but also promote a more prosperous and healthier world.
There is no denying that AI is a powerful technology with the potential to not only automate but supercharge many things, from menial labour to biomedical data analytics. It therefore offers at least a vast expansion of human capability. There is, however, a risk involved in the understandable hype generated by AI: that the expectations of researchers and industry run well ahead of the reality of the technology, inviting yet another AI winter that delays the development and implementation of this extraordinary technology. Our argument has been that this problem can be traced back to an old problem in technology adoption, whereby the engineering mindset of possibility dominates the economic mindset of value. In AI, as with so many technologies, economics must be put in the driver’s seat and engineering under the hood to avoid expectations getting ahead of reality and the advent of disillusionment.
We proposed a simple heuristic to habituate the economic mindset when assessing AI-enabled technologies. Businesses, governments, individuals: all can profit from adopting the three simple questions we propose for arbitrating whether the value of an AI technology exceeds its opportunity cost:
- What is the dollar value or percentage gains created by the technology?
- What is the dollar value of setup and ongoing costs?
- Relative to the next best alternative, are the marginal benefits of adoption greater than marginal costs?
In short: don’t ask what AI can do. Ask what it is worth.
Ashton, D., Martin, P., Frilay, J., Litchfield, F., Weragoda, A. & Coelli, R. (2021). Farm performance: broadacre and dairy farms, 2018–19 to 2020–21. ABARES research report, Canberra, March. DOI: https://doi.org/10.25814/ycy6-3p65. CC BY 4.0.
von Neumann, John (1958). The Computer and the Brain. New Haven: Yale University Press; Turing, Alan (1950). Computing Machinery and Intelligence. Mind. 59(236):433-460.
Sullivan, Joshua and Zuvatern, Angela (2017). The Mathematical Corporation. New York: Public Affairs.
Wooldridge, Michael (2020). The Road to Conscious Machines. London: Penguin.
Robbins, Lionel (1932). Essay on the Nature and Significance of Economic Science. London: MacMillan.
Here we build on the work of Joshua Gans, who has pioneered the economic analysis of AI systems with a series of papers and a summarising book: Agrawal, Ajay, Gans, Joshua and Goldfarb, Avi (2018). Prediction Machines. Cambridge, Massachusetts: Harvard Business Review Press.
This confusion is often invited by economists using “profit” as shorthand for “economic profit”. Economic profit is “accounting” (i.e. standard) profit minus opportunity cost.