An 'across-the-spectrum rethink' needed as firms eye use of AI in procurement
The disparity between the relentless hype around AI and the technology's present limitations has left most people sceptical ...
The third in our series on AI in logistics examines the need for the human touch as well as technology
The late 1990s was a hopeful time for the internet. There were few websites; they were difficult to make and, over dial-up, even harder to visit. Initially, having a company web address was thought of as a side-project for nerdy hobbyists. But gradually, some investors began putting money into it. Perhaps it would all come to nothing, but to be on the safe side, it made sense to keep an eye on it. To place a couple of side-bets on the sort of people who looked like they knew what it was all about: serious people, who had serious plans, probably.
They did not. Instead of a mark of authenticity, having a nice-looking ‘dot-com’ URL was the first and only step in many of these business plans. As we have since discovered, being on the internet is fairly straightforward compared with building and managing a business that does something. When it transpired most people wanted to meet a dog before they bought one, Pets.com collapsed.
Digital currency site Flooz.com took $50m of investor money with it when it went out of business in 2001, after crime syndicates began using it for money laundering. Luckily, nobody has had such a silly idea again. (Deliciously, Flooz is now a cryptocurrency exchange.)
Never again
Before the bursting of the dot-com bubble and the subsequent bankruptcies, companies which changed their branding to include ‘dot-com’ had been able to increase their share price by an average of $4.20.
In January, Intel, a company which makes processors for personal computers that are virtually unusable for AI, mentioned AI 38 times on an earnings call. “It only helps investors realise just how far removed [Intel] is from anything related to AI,” concluded a SeekingAlpha report. A few months later, the US Securities and Exchange Commission (SEC) fined two companies for so-called ‘AI-washing’. The two investment advisors, Global Predictions and Delphia, both claimed they were leveraging AI to make far-seeing predictions. The SEC ruled they were not.
A recent report from Goldman Sachs, entitled Gen AI: too much spend, too little benefit?, showed that smarter people than your correspondent are taking note of this as well. Jim Covello, Goldman Sachs’ head of global equity research, pointed out that actual disruption has occurred when a cheap business model unseated a higher-cost one.
“…AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do,” he said.
“Currently, AI has shown the most promise in making existing processes – like coding – more efficient,” Mr Covello added, “although estimates of even these efficiency improvements have declined, and the cost of utilising the technology to solve tasks is much higher than existing methods.”
A big problem for AI-enabled firms is that AI does not really exist. In ‘generative AI’, the ‘generative’ is to ‘AI’ what ‘horse-drawn’ is to ‘car’, a distinction, unfortunately, which is more than just semantics. Modern large language models (LLMs) can give a convincing impression of thinking, and even improvising (this is by design; they are, after all, engineered to impress investors). But in reality, as computer scientist Dr Michael Wooldridge explained in a recent article, the chief breakthrough of the past several years, and the one that has catapulted OpenAI into stock market superstardom, lies in training data – 575 gigabytes of it, in fact.
“Where did they get all this text? Well, for starters, they downloaded the World Wide Web. All of it,” he writes. “Every link in every web page was followed, the text extracted, and then the process repeated… us[ing] very expensive supercomputers containing thousands of specialised AI processors, running for months on end.”
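To make that process concrete, here is a toy Python sketch of the crawl-and-extract loop Wooldridge describes. It is an illustration only: the `collect_text` function is a hypothetical name, and real training pipelines add deduplication, robots.txt politeness, quality filtering and months of supercomputer time on top of this skeleton.

```python
# A toy sketch of the crawl-and-extract loop described above.
# Assumptions: 'requests' and 'beautifulsoup4' are installed, and the
# seed URL is supplied by the caller. This is an illustration, not a
# production crawler.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def collect_text(seed_url: str, max_pages: int = 100) -> list[str]:
    """Follow links breadth-first, extracting visible text from each page."""
    queue, seen, corpus = deque([seed_url]), {seed_url}, []
    while queue and len(corpus) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=5).text
        except requests.RequestException:
            continue  # dead links are simply skipped
        soup = BeautifulSoup(html, "html.parser")
        corpus.append(soup.get_text(separator=" ", strip=True))
        # "Every link in every web page was followed":
        for link in soup.find_all("a", href=True):
            nxt = urljoin(url, link["href"])
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return corpus
```

Note that nothing in this loop asks whether the text it hoovers up is true; it only asks whether the text exists.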
Check out this podcast clip of Greg Kefer from Raft explaining how AI could transform logistics
Quality, not quantity
As undeniably ambitious as this mass data-recycling is, something important is omitted in the process: critical thinking. “Yes,” proclaimed Google’s AI to an AP reporter in May, “astronauts have met cats on the moon, played with them, and provided care.
“Neil Armstrong said, ‘one small step for man’ because it was a cat’s step,” added the $2.02trn search engine, helpfully. “Buzz Aldrin also deployed cats on the Apollo 11 mission.”
This is known in the business as an AI ‘hallucination,’ wherein AI trained on bad data uncritically reproduces garbage. An authoritative tone can go a long way on the internet, and humans have long been able to fool each other with misinformation; after all, some believe Armstrong, Aldrin and Collins never left Earth. If we need help to judge the veracity of a piece of dubious information – something even veteran social media users require from time to time – we will not get it from generative AI.
What will happen, then, when new generative AI models are trained on misinformation published by previously misled generative AI? There is now growing concern that subsequent iterations of generative AI, applied without addressing this complex critical thinking hurdle, will not only not help, but will lead to a kind of unmaking of existing human knowledge.
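That concern has a simple numerical analogue, often called ‘model collapse’. The toy Python sketch below is an illustration under stated assumptions, not any lab’s actual experiment: each generation is ‘trained’ only on samples of the previous generation’s output, and because resampling with replacement can lose items but never recover them, the stock of distinct ‘facts’ shrinks generation after generation.

```python
# Toy illustration of 'model collapse': each generation is "trained"
# only on samples drawn from the previous generation's output.
# Resampling with replacement can lose vocabulary but never regain it,
# so diversity decays monotonically; a crude stand-in for knowledge
# being unmade.
import numpy as np

rng = np.random.default_rng(0)
corpus = np.arange(1_000)  # generation 0: 1,000 distinct "facts"

for generation in range(10):
    survivors = len(np.unique(corpus))
    print(f"gen {generation}: {survivors} distinct facts survive")
    # The next model only ever sees what the previous one emitted:
    corpus = rng.choice(corpus, size=corpus.size, replace=True)
```

Real LLM training is vastly more complicated, but the one-way ratchet is the same: whatever one generation fails to reproduce is unavailable to the next.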
In situations where lives are at stake, this is decidedly less funny. At the SMM trade fair, a Japanese ship’s captain, who did not want to be named, told me that, if he followed some of the routing suggestions made to him by AI-enabled weather routing companies, he and his crew would die. This bears out input from Weathernews last year that some AI routing suggestions made without a ‘human in the loop’ had been “insane”.
Quizzed on this a fortnight later, NAPA EVP, shipping solutions, Pekka Pakkanen confirmed to The Loadstar that in his company’s AI-enabled voyage optimisation software, a ship’s captain is the last and only line of defence.
“Traditional weather routing companies which have existed for decades have an application they use to optimise, and a human doing manual checks. That’s prone to human error,” he explained. “We are a software company; we don’t say you have to take the route, but it’s the optimal that the routing algorithm returns – it’s the responsibility of the master. It’s more advanced.”
Check out this podcast clip of Greg Kefer from Raft explaining the future of AI in logistics
The human touch
This is not to say that no one is exercising judgement, however. In many cases, ‘AI’ serves as a handy smokescreen for old-fashioned labour arbitrage, as it turned out to be in the case of Amazon Go, the firm’s AI-enabled physical store locations, which, it was discovered, were powered largely by remote workers in India watching CCTV footage. Meanwhile, Facebook’s AI-enabled content moderation relies inordinately on warehouses full of low-paid workers scrolling through a wall of Nazi propaganda, dead bodies and child pornography.
Unless OpenAI can successfully write an algorithm to verify truth, something whose fundamental assumptions are indefinable even for humans, ever-greater processing power dedicated to data-scraping will not lead to a higher understanding on tap, as generative AI’s proponents promise, but to a snowballing of unusable garbage.
When dealing with an AI-enabled firm, ensure that there is a “human in the loop” of any decision-making upon which lives and livelihoods depend. When cost or profit projections are made based on the presumed abilities of generative AI, ask them to show their working.
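In software terms, a ‘human in the loop’ can be as simple as a hard gate between suggestion and execution. The Python sketch below is a minimal illustration; the `Proposal` type and `submit` function are hypothetical names, not any vendor’s actual API.

```python
# A minimal human-in-the-loop gate: the model may propose, but a human
# must explicitly approve before anything high-stakes executes.
# All names here are illustrative; no vendor's API is being described.
from dataclasses import dataclass

@dataclass
class Proposal:
    action: str        # e.g. "reroute vessel via alternative corridor"
    rationale: str     # the model's stated reasoning, for the reviewer
    high_stakes: bool  # lives or livelihoods depend on this decision

def execute(proposal: Proposal) -> None:
    print(f"Executing: {proposal.action}")

def submit(proposal: Proposal) -> None:
    if proposal.high_stakes:
        print(f"Proposed: {proposal.action}\nRationale: {proposal.rationale}")
        answer = input("Approve? Type APPROVE to proceed: ")
        if answer.strip() != "APPROVE":
            print("Rejected by human reviewer; nothing executed.")
            return
    execute(proposal)

submit(Proposal("reroute vessel via storm corridor",
                "saves 1.5 days at current speed", high_stakes=True))
```

The point of the design is that the model may propose but cannot dispose: nothing high-stakes runs without an explicit human decision.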
Be reassured: this article is not a scare-vertisement about how new super-powerful AI will bring about Terminator, or take away your job if you do not invest now. (If it is ever successfully invented, perhaps it might.) Be wary, instead, of giving AI responsibility over business decisions it is not equipped to make.
Do not fear technology; but beware the eternal gullibility of humans, who, in time-honoured tradition, will be fooled by serious-seeming people with serious-seeming plans.
You can hear the whole AI in logistics podcast here.