What Chafes My Groin #2

Customer Service Is Dying

The customer service industry has been slowly dying for the past decade or two. It used to be that businesses employed real live humans whose job was to help you with their products: right after purchase, when the products misbehaved, and so on. The bigger the company (say, a telecommunications company with millions of customers), or the more sensitive the product (say, a car that can literally kill you and others), the larger the support teams these companies had to employ.

But then came the really big companies, the Googles and the Facebooks, and showed the world that the more customers you have, the less customer service you need to provide. In some cases, no customer service whatsoever. It doesn't matter that a customer has a bad experience, we have 500 million more customers to abuse.

The progress made in the fields of voice and text recognition in the past decade has allowed such companies to provide AI-based "support" in place of real human support. It's no longer an automated answering machine that directs your phone call to the appropriate service representative; it's machines all the way. And these machines/AIs/chatbots/whatever are preprogrammed to handle only the most basic, banal issues or concerns you may have.

These giant tech companies have automated every aspect of their businesses they could automate with the least amount of effort. For example, these companies do not really have human moderation/security teams that make sure their customers are "good citizens/netizens". Instead, they have black-box algorithms that flag your account based on who knows what, automatically shutting down and denying you the services you paid for. And when that happens, good luck finding a human with an actual brain who can understand why that black box erred and get your account reinstated. In our love of schadenfreude, we have trained ourselves to believe that such cases are justifiable, that "only bad people doing something wrong" get their accounts shut off like that, so screw 'em. That's completely false, as many, many cases prove, for example this recent one.

These companies have built themselves in such a way that the products they sell you are not truly owned by you—except for liability purposes—and can be taken away from you at any point in time with absolutely no recourse, keeping the money you paid. In other words, Google isn't a tech company, it's an airline. You pay them, and maybe they'll give you the service you paid for, maybe they won't; it's entirely up to them. If they don't, it's justifiable because you're probably a terrorist anyway. Amazon can take away your Kindle books whenever they want. Tesla can shut down your vehicle whenever they want. Google can disable your email account. Peloton can brick your exercise bike.

You may think that the solution is easy: just don't be their customer. Great, but the real problem is that these mega successful companies are setting an example to all others. When a new VC-funded startup company is founded with three people, these three people do not emulate Larry Page, Sergey Brin and Scott Hassan's Google of 1998, when they were three guys in a dorm room building a better search engine and slowly growing it into a viable business. They emulate the Google of the 2020s: aggressive, disingenuous, sometimes downright hostile. Soon, I'll have no choice because there won't be any honest businesses left.

The explosion of LLMs, spurred by OpenAI's ChatGPT, is much more likely to accelerate the death of the customer service industry than, say, get all software developers fired. I believe that soon it'll be rare to get actual service from a human. Governments will probably adopt AI-based service solutions too; many of them are already hiding important public services behind iOS apps with questionable permissions.

AI Is Eating Itself

The funny thing about ChatGPT, by the way, is that it appears to solve a problem that was created by AI (and Google) in the first place. Google's so-called "algorithm" has single-handedly turned the Internet from a medium of knowledge into a big pile of bullcrap. You may not be aware, but long before ChatGPT came along, virtually every single piece of text returned on the first page of Google's search results was machine generated, with false, sponsored, and sometimes downright dangerous content. That article you read on the pros and cons of different dental crowns was pure bullshit cooked up by an AI ten times worse than ChatGPT.

What ChatGPT gives you is a much more comfortable experience than Google or other search engines. You ask it a question, it gives you an answer, no need to wade through countless "The Best" articles and flashy ads, no need to do trial and error runs of different ways to express your question as a search query that Google will properly understand. But LLMs are already eating themselves. It took maybe a week after ChatGPT's launch for humans to completely abandon any effort to write texts themselves. Add to that the fact that most texts on the Internet were machine generated already, and soon everything you'll read will be false machine generated content written by an LLM trained on false machine generated content. A recursion of failure, not unlike my beloved circle of failure.

I know you may not be convinced, but let me assure you, there are many businesses around the world whose sole purpose is to create machine-generated websites that dominate Google's search results across every field imaginable. You are reading nothing but bullshit. Don't get me wrong, I think ChatGPT is impressive, and I find it leaps and bounds better than search engines when it comes to programming-related searches; I'm even working on some projects that use it, such as aiac. But I am very aware of its current shortcomings and dangers.

What Are We Getting Fired For Anyway?

Many people are worried that AI will replace their jobs, some justifiably and some not, in my opinion. We are being told, though, that this is a good thing; that getting replaced by contextless machines will "free us" to do what we really want; that it will bring a new age of creativity, efficiency and freedom upon humankind.

This is an even bigger pile of bullcrap than an AI-generated article about the best washing machines in 2023. The human population keeps growing; what we need is more jobs, not fewer, so cut the crap.

When ChatGPT exploded, and all of my peers and colleagues started talking about how it will replace us, I disagreed, mostly because these LLMs do not understand context, do not understand how things work, and have a tendency to invent things that do not exist. It's quite common for ChatGPT to give completely wrong answers to programming questions. These LLMs are too eager to please, which means they'd rather give you a false answer than say "sorry, I don't know." I don't think this is a unique trait of ChatGPT, and I don't think it's a bug; I think it's a conscious feature, though I don't have a good answer as to why.

Today, I am more convinced that LLMs will replace many programmers around the world, despite the many issues they have. The reason is that most businesses just don't fucking care. The average tech company doesn't need to create a product that works, it needs to create a product that kinda sorta looks like it works. If ChatGPT can spit out source code that seems to do what you told it to, someone will build their entire fucking business around it. Many someones. This doesn't necessarily mean that programmers will lose their current jobs; it can also mean that we may see an explosion of terribly made software services polluting the world. And I, for one, am going back to sleep.