Gone are the gatekeepers

Reassessing our professional contribution in a changing reality

Text by Leah Guren



I have lived and worked in this profession long enough to have witnessed many instances of disruptive technology. Disruptive technology, a term first used by Professor Clayton Christensen of Harvard Business School in 1995, is a tech innovation that radically changes the way products or services are used, often driving sweeping changes in consumer patterns. Think of how smartphones changed so many things so quickly.

AI is the latest disruptive technology that is currently reshaping our professional landscape.

I am not an AI expert, nor am I a futurist who can make educated guesses about the way things will unfold. Rather, I want to present a case for what we, as TechComm professionals, can offer to help steer the direction of AI. AI is being used for everything from robotics to art; for the purposes of this discussion, I am referring to AI's use in creating documentation.

The neutrality of tech

Many people have joined the anti-smartphone bandwagon, decrying the dangers of excess screen time for both youth and adults. Naturally, they ignore the irony that the platform on which they share their opinions is… online! Yes, in vlogs, blogs, and TikTok clips, self-appointed experts caution us that social media, endless apps, and excess screen time in general are bad for our physical and mental health.

I agree, but the issue is far more complex. The problem does not lie with any specific technology, but with how humans exploit it. Technology is morally neutral. Almost everything we create can be used for beneficial or destructive purposes. You can no more say that tech is evil than you can say that food is evil. Just think of a healthy meal prepared with good ingredients and enjoyed in the company of friends. Now compare that to a calorie bomb of junk food, full of trans fats, chemicals, and overprocessed ingredients, wolfed down at your desk while your boss yells at you. The former nourishes your body, boosts your immune system, and improves your mood, while the latter triggers indigestion, acne, and depression.

To add to the complexity, even the most wholesome food can become problematic if someone consumes three times their caloric requirements. Your smartphone is no different.

With AI or any other disruptive technology, industry leaders should make careful choices in guidance and legislation, and consumers should assume some responsibility regarding consumption habits.

But what is the unique challenge of AI and how did we get here?

Once content had gatekeepers

Think about what content looked like in the 1980s. Companies controlled 100% of the user-facing content for their products. Only the geekiest of users could find and use online forums on the pre-WWW Internet.

Content platforms, such as magazines, journals, and other publications, were also gatekeepers. Yes, people could submit articles, but they had to be approved, vetted, and properly edited by the publication platform.

There was also the self-publishing niche market. Unscrupulous businesses charged authors to typeset and print their content, while providing no vetting, editing, or distribution services. As this was expensive and impractical, it never gained significant traction. I know people who paid to have their illiterate musings published and ended up with boxes of these unwanted books in their garages.

The benefit: Gatekeepers exercised some control over what was published, at least monitoring basic language quality. Better platforms improved quality by carefully vetting content for accuracy, consistency, structure, and style. This helped reinforce linguistic rules and preserve writing quality.

The cost: It was not easy to gain access to a content platform. If you were not an employee of a company or a recognized journalist, it was nearly impossible to get your information out to the public. Not only did this stifle people who had wonderful, wacky, or creative ideas, but it also kept consumers and users from sharing their experiences with a product, including useful tips and tricks.

A crack in the dam

The advent of the World Wide Web and browser-readable content meant that anyone could self-publish. Blogs, user groups, and websites dedicated to a product or hobby sprang up almost overnight. Beginning in the early 1990s, there was a sudden drop in content quality, at least from a linguistic perspective. This is because the early adopters of online platforms were typically far stronger in tech skills than in writing, editing, or design. (If you ever want a good laugh, take a look at some of the websites that were published around this time.)

The benefit: More information was available than ever before, and some of it was very useful. If you were willing to dredge through a sea of sludge, you could find some real gems. For example, I was able to fix a vacuum cleaner myself because of repair info that the product manufacturer had not published, but a helpful user had!

The cost: Bad content outnumbered good content by a factor of thousands. You had to develop excellent research and validation skills to be able to sift through everything. In other words, vetting and gatekeeping were still required, but now the onus had been pushed onto the individual content consumer.

The floodgates opened

Social media apps on smartphones became the disruptive technology that opened the floodgates. Now, no gatekeepers could possibly keep up with the massive deluge of content being posted. Anyone with a smartphone could easily and shamelessly post anything, and linguistic quality took a beating.

The sheer volume of content created is staggering. One estimate is that 90% of the world’s data has been created in the last two years.

The benefit: Everyone can now create and share, as well as consume, content. While the garbage-to-gold ratio remains impossibly high, there is more accessible content out there for those who know how to mine for it.

The cost: The huge volume of unedited content has caused a significant dumbing down of the general population. Mistakes that would have seemed laughable a decade ago are now normalized. How many times have you heard a native English speaker say “on accident” instead of the correct “by accident”, or misuse another common idiom? I hear mispronounced words and misused terms, and don’t get me started on mistakes in punctuation, capitalization, and syntax. It is a constant source of cognitive friction for those of us who are literate and care about language.

So, what about AI?

AI is here and already fully integrated into many platforms and applications. There is no question that many companies are already using AI to generate content.

However, we should not fear that software is going to take our jobs, any more than we should have feared that spell-checkers or syntax validators would render us redundant. AI is simply the latest family of tools that we need to learn to use to our best advantage.

The important thing is to understand how AI works. Generative AI tools are built on large language models (LLMs), which are trained on as many examples of text as possible so that they can predictively form an answer, one likely word at a time. And this is where the problem lies.

If the bulk of today’s content is error-ridden garbage, we can expect AI to further normalize certain mistakes, unless the models are trained carefully in syntax rules, taught to reject common error patterns, and run through strict language correction. Some tools are already doing a reasonably good job at this. And why? Because people like us are involved in writing and training the models. We can still act as a type of gatekeeper, but from the backend of the platform.
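To see why this matters, consider a deliberately simplified sketch. The toy Python example below is not how a real LLM works internally; it uses a simple bigram (word-pair) model and an invented four-sentence corpus. But it illustrates the same frequency-driven principle: a model trained mostly on flawed text will confidently reproduce the flaw.

```python
from collections import Counter, defaultdict

# Invented toy corpus in which the error "on accident" outnumbers
# the correct idiom "by accident" -- much like unedited web content.
sentences = [
    "it happened on accident",
    "it happened on accident",
    "it happened on accident",
    "it happened by accident",
]

# Count which word follows each word: a bigram model, a drastic
# simplification of how an LLM learns statistical patterns from text.
follows = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

# The model "predicts" the continuation it has seen most often.
context = "happened"
prediction, count = follows[context].most_common(1)[0]
print(f"After '{context}', the model predicts '{prediction}'")  # -> 'on'
```

Scale that up to billions of sentences and you have the core of the problem: without deliberate curation and correction, the majority pattern wins, whether or not it is correct.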

Another opportunity for TechComm professionals is in writing effective prompts. I discovered years ago that most people cannot create a good search engine query without some training. The ability to ask the right question, or to generate focused prompts, is still a skill that not everyone has. We can leverage our understanding of taxonomies and language constructs, and our sense of exactly the right degree of specificity or generalization, to get the required results.
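As a concrete (and entirely invented) illustration of the difference this skill makes, compare the two prompts below. The wording is hypothetical; the contrast in specificity is the point.

```python
# A typical untrained prompt: the model must guess the audience,
# scope, length, and style.
vague_prompt = "Write documentation for our backup feature."

# A prompt scoped the way a TechComm professional scopes any writing
# task: audience, purpose, structure, and style are all pinned down.
focused_prompt = (
    "You are a technical writer. Write a 200-word task topic for end "
    "users explaining how to schedule an automatic daily backup. Use "
    "numbered steps, second person, present tense, and active voice. "
    "Assume the application is already installed."
)
```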

This is not a pipe dream, but a current reality. Companies that create machine-learning datasets are hiring writers, including creative writers, to help train their systems. It may seem like a lateral step, but it is one of the faster-growing niches in our industry.

What is our role?

Ideally, I would like to see more linguists and TechComm professionals involved in the decision-making and regulation of AI. This is a technology with great potential for harm as well as for good. As Donald DePalma wrote in the November 2023 issue of tcworld magazine, AI poses some unique ethical challenges that we must face.

My biggest fear is that the technologists and coders become so enamored with the power of AI that they race ahead without considering the impact on language, clarity, accuracy, and communication. This is the time for us to step forward and demand a seat at the table, lest future generations be left with linguistic choices made by sheer numbers rather than actual rules.

There are days when I find this terribly exciting. Then there are days when I dream of retiring and running a doggy daycare. Somewhere between wild enthusiasm and soul-crushing dread lies the practical truth of AI. 

Do you have a different perspective on our changing role in the face of AI? We want to hear from you!