Is ChatGPT a confidence trick?
Confidence tricks
Confidence tricksters work by getting us to trust them, then getting us to do things that benefit them but which we would not otherwise do. Usually, the trick depends on creating an illusion. If you get an unexpected call claiming to be from your bank, there is a chance that you are actually speaking to someone else, and that they are not really trying to help you. They may want to get your login details and use them to empty your account. That’s why banks promise that they will never ask for your account numbers over the phone or in an email. Tricks like this go back to ancient times: all the internet has done is make some of them easier to pull off and harder to detect. To be fair, the internet has probably made other scams harder: it is no longer as easy as it once was to turn up in some remote place and claim to be a medical doctor. Then again, on the internet nobody knows if you are a dog. Most people, most of the time, have meaningful interactions with people whom they have never met and never expect to meet. The possibilities for deception are endless, and some of them enable confidence tricks.
In a modern setting, the victim of a confidence trick (the mark) need not be an individual. It could be a company that is induced to take actions that are not necessarily in its interest. The dynamics are similar, because companies are made of people, and the corporate environment does not prevent them from falling for modernized versions of the con artists’ old tricks.
I believe that there are parallels between what OpenAI is doing with its new product, ChatGPT, and what con artists have always done. I do not necessarily believe that OpenAI is attempting to con us, but I do think it is tapping into the same human weaknesses that con artists exploit. Essentially, it is relying on our credulity, optimism and natural social instincts, and hoping to benefit.
ChatGPT
ChatGPT is a so-called generative AI system that has created a lot of excitement. It can produce written text that is a very good match, in terms of style, for what an expert would produce. Name any field: if there is written text about it, ChatGPT can produce text whose style closely resembles what an expert might write. Either ChatGPT is actually a superhumanly smart polymath, or it is a counter-example to the common assumption that if you write like an expert you must be an expert. I tend to believe the second alternative, partly because its performance on things that I know about is not reliably good. If I am right, we are certainly dealing with an illusion. Its creators, OpenAI, are straightforward in admitting that they intend to replace the current research prototype with a paid-for service, and their revenue projections, if realized, will certainly generate a lot of money for the company and warrant the large investment of effort and computer time that has gone into it.
Perhaps this is all above board. OpenAI would not be the first company to believe strongly in the potential of an untested product.
Or perhaps OpenAI is fooling itself. The company is a non-profit with lofty goals related to AI safety.
OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity. (OpenAI mission statement)
This mission makes much more sense if AGI is a realistic and imminent prospect than if it isn’t. The mission is of a piece with the philosophies of longtermism and effective altruism, which mix concerns about the future of human society with dogmatic views about how that society will and/or should evolve. OpenAI board members clearly do hold such views. Speaking as a long-term skeptic about Utopian schemes, I find longtermism alarming, because it is naive about the consequences of concentrating power in the hands of supposedly well-meaning philanthropists. Most people who acquire that kind of power run the risk of abusing it, either intentionally or unintentionally. To be fair, the mission does mention benefit to all; I would still be happier if it were more clearly supportive of the idea of decentralized and democratic decision making. In any event, if this line of thinking is correct, we are freed from the need to ascribe bad motives to OpenAI.
Or perhaps OpenAI knows exactly what it is doing, and ChatGPT is part of a ruthless commercial strategy for dominating a future market. If so, the parallel to a confidence trick may be close, and we, along with every other product developer who might rely on OpenAI’s services, are the marks. In my view ChatGPT is very impressive, but there are as yet no proven use cases aside from a few slightly disreputable ones that we will get to in a moment, and one other that will take a while to mature. Let’s look at this more closely.
The anatomy of a confidence trick (adapted from David Maurer)
David Maurer, an academic writing in the 1950s, outlined the stages of a con. Imagine a gang of swindlers preying on a victim during and after a train journey from a small town to a bigger city. The picaresque language is dated, as is the setting, but the concepts haven’t changed in 70 years.
The con artists’ first step is to find a mark who is worth swindling, but who may lack the network and resources to avoid the con. A very rich mark would probably have access to lawyers and financial advisors, and between them they would smell a rat. At the end of this stage, one of the swindlers has met a potential mark and begun to establish a friendly relationship. This swindler is not the central player in the con, but an assistant.
When the mark gets off the train, the assistant offers to show him around town (in this olde worlde setting the mark is male, for sure). There will be talk of business opportunities, but this is just setting the stage. The point is to establish trust. Then the assistant introduces the leader of the group, whose role is to represent to the mark the possibility of extreme and magical success, by means of a scheme that is mysterious and perhaps a bit shady. The next step is for the leader to show the mark that the scheme works, by making a bit of money for him. Maurer calls this stage the convincer. If the scheme is a complete sham, the money made for the mark is an illusion. If, on the other hand, the scheme has an element of reality, perhaps it does bring in some money, but isn’t as scalable as the mark thinks. Either way, the mark’s newly acquired money is immediately taken back into the scheme and reinvested. The mark still hasn’t really invested any of his own money, but feels like he has, because he feels ownership of the money that was made and then reinvested. At this point it is helpful if the con artists can also make the mark feel privileged to be part of their exciting big city world. You can imagine how a company using the latest AI might play this game in order to draw in partner companies with more solid but less sexy products.
The context of the train ride and the big city is important, because the mark only has a short time to spend in the city. Unless he moves fast, he might miss the opportunity. In modern-day versions of this kind of con, the urgency may be created by other means, but it is once again present.
The swindlers now want the mark to solidify his mental commitment. This might be done via a piece of theatre in which confederates of the main swindler apparently commit to the scheme. This can also acclimatize the mark to the amounts of money and commitment that are involved. The modern equivalent of this might be a flood of compelling publicity suggesting that someone somewhere is hugely benefitting from the new and exciting scheme. Things move fast these days, so companies who want to be in on the act can’t afford to be cautious; they have to make their move now. Where have we heard this kind of talk before?
In the old-fashioned setting, the commitment is financial, and the assets in question are dollars and cents sitting in a hometown bank. The modern version may involve a broader conception of what kind of asset is committed. For example, a modern company playing the role of a mark may have engineering and product-development resources that it could direct in various ways. The modern equivalent of the swindlers may wish to influence the direction of these resources, moving them in a direction that the mark might not otherwise choose.
In the penultimate stage, the mark invests his own substantial assets in the scheme, which may appear to succeed, but is in fact a sham. The script calls for a final hitch that prevents the expected fabulous profits from coming to the mark. In the modern setting, the company playing the role of the mark commits resources in the hope of developing a product that works well with the semi-magical resources provided by its potential partners. To be fair, the modern version is not necessarily a sham. It could instead be a real business possibility, but one that is less immediate than the marks have been led to believe. Thus, in the modern version, there are two potential payoffs. Either the scheme pays off, and its inventors get a cut, or the prospect of success proves illusory, and the inventors benefit by directing and consuming the efforts and resources of the marks.
In the final stage, the goal is to let the mark down gently. Maurer calls this final stage “blowing him off”. I don’t know why. If it works well, the mark doesn’t even realize that there was a swindle, and thinks that everything that happened is just bad luck. The con artists benefit from this, because the mark has no reason to pursue them for retribution, but sees them as fellow victims. In the modern setting, the goal is to make the marks feel that the decisions that they made were ordinary business decisions, and to avoid bad blood from the feeling of being manipulated.
What we learned
Maurer’s description of the long con includes elements that will be absent or not needed in the present day. The key features are that the mark is set up to participate, willingly and enthusiastically, in a scheme that turns out to be not at all in his interest, and that the swindlers benefit directly from the mark’s decision to invest. We may be able to spot analogues of the stages given above, in which case it is reasonable to suppose that we are dealing with some kind of con.
From the perspective of the con artists, the challenge is to delicately lead the mark through a series of psychological states that predispose him to make the decisions that the script requires. The details differ, but the principles are eternal.
The con works, if it works, because it induces the mark to fall prey to a panoply of human weaknesses. These include:
- Optimism. If the mark didn’t believe that the scheme had a chance of success, he would pull out before committing his own resources.
- Sociability. The swindlers have carefully created a social world for the mark. We naturally want to please and fit in with the people we are with.
- Determination and commitment. Depending on how well “blowing him off” was executed, the mark may have an inkling that something is wrong. But since determination is a virtue, he may stick with the plan despite his doubts. The so-called “sunk cost fallacy” may hold him to this: he has expended a fair amount of effort and money so far, and giving up now would feel like an admission of having made a mistake.
- False feelings of competence. The con is set up so that the mark feels as if he is in charge, understands the situation and is driving the process. None of these things are true. In fact the mark is a mark precisely because he doesn’t recognize that the situation he is in requires skills that he doesn’t actually have in sufficient measure.
- Wishful thinking. Everyone is attracted by the idea of a new life in a new and better world, and so is vulnerable to promises that the scheme will deliver such a thing. In the old-fashioned world, this was the combination of the reality of an exciting big city and the prospect of astounding future success in that milieu. This kind of promise works especially well if the world in general seems to be in rapid transition, which, in the modern world, is certainly an impression promoted by 24-hour access to real-time internet news.
Back to ChatGPT
OK, what about ChatGPT? Placing myself as a potential mark, with an existing business, I find myself halfway through Maurer’s process. I am tempted to believe that OpenAI is capable of somehow realizing fantastic profits, but it is mysterious how this will happen. Since I want AI to succeed, preferably in a way that is different from the advertising-dependent successes of the past decades, I have begun to explore ways in which my business could use this exciting new thing. OpenAI hasn’t asked me for anything yet. If I am gullible, I might think that OpenAI has delivered the convincer. I am starting to think that my business can’t not invest in a deeper collaboration with OpenAI. If the whole thing is a scam, I am on the edge of a tight spot.
Now, I’m not saying it is a scam, but if it were, that is where we would be. OpenAI has no business idea or product that will make money by genuinely helping anyone. There are some obvious, non-fantasy business cases for ChatGPT, such as the provision of plausible but meaning-free text on more or less any topic. Search engine optimizers will jump at the chance to fill their sites with text that is difficult to distinguish from genuinely valuable content. Lazy students will definitely find it easier to generate plausible responses for assignments that they don’t want to do, or about material that they can’t be bothered to learn. ChatGPT is not a patch on David Foster Wallace or Cormac McCarthy, but it can generate text that is agreeably weird. That’s not enough on its own, but I am pretty sure that someone with interesting things to say will be able to use it to write a good novel. If these are the use cases that OpenAI is after, I would say ‘Have at it’, but that’s not going to generate fabulous profits, and in any case any success would be pretty transient, especially in the first two applications, which are arms races between fakers and detectors of fakery. No cons involved there, I think.
Where there might be a con is if ChatGPT suckers the providers of educational materials, or writing help, or whatever, into a dependence on its tools. Then we might be further along in the scheme: we might have been drawn in and feel a strong commitment, and we might be contributing subject expertise, engagement and product design. Also, in this scenario, if the products, contrary to my personal expectations, actually work, we finish with a long-term dependency on OpenAI, not a graceful let-down and an acceptance of sunk resources. If the products don’t work, the simple parallels to Maurer apply and we retreat, perhaps chastened, to the internet equivalents of our small towns.