My list of available pastimes is severely limited, owing largely to my newfound responsibility for the lives of two tiny humans. Gone are the days of weekends rock climbing and bivvying on Dartmoor; Friday night’s challenge these days is Get The Shrieking Baby To Sleep. Fortunately, since November last year I have ‘Chatting with ChatGPT’, a pastime which has awoken the public to AI much as a 9-month-old screaming at 3am awakens a parent. Its parody of Donald Trump explaining bitcoin, or its parable about removing a peanut-butter sandwich from a VCR written in the style of the King James Bible, has certainly been amusing, but more so mind-blowing.
The hype around ChatGPT, and AI more generally, has been equally impressive. We are apparently facing a future of workless leisure and the end of humanity at the same time – a paradox on which ChatGPT can, on demand, write a very snappy limerick. And there is nothing an investment bank loves more than hype – the more greed- and/or fear-inducing, the better. There are big trading bucks to be made from lists of stocks to buy or sell based on whether AI can be spun as opportunity or threat. Step forward NVIDIA, maker of the computer chips that power AI, which, after trebling this year, has joined the very elite club of $1 trillion US companies.
For the other side of the trade, we’ve seen market giants humbled by this chatter. Three AIM shares – Keywords Studios, RWS Group, and Zoo Digital – share the common theme of providing outsourced “work for hire” services, such as translation, to other companies. All are down by over a third in the last four months on concern that these businesses are redundant in a world where AI can translate, code, and win photography prizes for free.
There is, though, a note of caution in the small print. It doesn’t take much exploration with ChatGPT to realise that – like my 3-year-old – it makes stuff up. Last month, for example, two US lawyers were fined for submitting fake court citations generated by ChatGPT. It transpires that ChatGPT wrote a very plausible-sounding legal brief, the only flaw being that six of the cases it cited were entirely fictitious. Alas, we cannot simply dismiss this as a bug in a new technology that will improve over time: it is an unavoidable feature of a technology built on seeking correlation and pattern in very large data sets. (Aside: as well as toying with ChatGPT, readers will enjoy the Spurious Correlations website – where we learn, for example, that US per capita consumption of mozzarella cheese between 2000 and 2009 has a 95.86% correlation with civil engineering doctorates awarded.)
Our working assumption, therefore, is that artificial intelligence as we currently know it will still need human intelligence overseeing and checking its inputs and outputs. ChatGPT is very good at producing convincing waffle – but one wouldn’t want to rely on its translation of medicine labels unchecked. At best, it is a mega productivity-enhancing tool. At worst, it will create a self-reinforcing feedback loop of falsehoods, with the silver lining that by primary school my children will be eating mozzarella balls for lunch because ChatGPT says that will produce more civil engineers.
The role of regulation is also unclear at this point. Thorny issues of copyright infringement sit right at the heart of the current Hollywood strikes: whether or not AI will be confined to training only on open-source data has huge implications for the entertainment industry and indeed every other creative industry.
None of that should play down quite how transformative AI productivity gains might be – and already AI’s role in pharmaceutical research is hugely exciting. In our AIM Portfolio Service, we invest in Instem PLC. Instem’s In Silico software draws on over 500,000 toxicology studies covering more than 200,000 chemicals, identifying patterns and trends to make predictions about new compounds. Where this work would previously have required expensive and laborious laboratory time, AI-powered computational toxicology is both accurate and far faster.
It is impossible for us to prove either way whether AI poses an existential threat to a given company, or indeed to humanity itself. And because of that known unknown, investors will likely remain fretful about the stocks deemed by the investment bank gurus to be negatively exposed. We can, though, be fairly confident that the AI era will be hugely disruptive, and that a company’s success will ultimately depend on its willingness and ability to adapt. Yet in parallel with the threats, leaps in technology foster opportunity – and some of those opportunities are already profitable, growing, and investable.
And no, ChatGPT did not write this article for me. It did, however, suggest the title.
Ian Woolley – Head of AIM Services
To hear more from Ian at our AIM webinar on 15th November, please click here.
Hawksmoor Investment Management Limited is authorised and regulated by the Financial Conduct Authority. Registered in England No. 6307442. Registered office: 2nd Floor, Stratus House, Emperors Way, Exeter Business Park EX1 3QS.
This document should not be interpreted as investment advice, for which you should consult your financial adviser. The information and opinions it contains have been compiled or arrived at from sources believed to be reliable at the time and are given in good faith, but no representation is made as to their accuracy, completeness or correctness. Any opinion expressed, whether in general or on the performance of individual securities in a wider economic context, represents the views of Hawksmoor at the time of preparation and may be subject to change. The value of an investment can fall as well as rise and you may not get back the amount originally invested. Investments in AIM carry an above-average level of risk – please see our Guide to the AIM Portfolio Service, available on our website www.hawksmoorim.co.uk, for full details. FPC1225.