A new wordsmith is here to help you
"The more you practice, the better you'll get. It's really that simple." The last line of this article will tell you why this quote is not as simple as it seems.
This is an actual event. A few years ago, a friend of mine bought the latest model iPhone as soon as it was released and began setting it up with all the apps he wanted. That done, he started testing Siri, the intelligent virtual assistant.
But he experienced trouble as Siri kept on telling him: “Sorry, I don’t understand what you are saying.”
Siri had difficulty following my friend’s Australian accent. A few more tries, and my friend lost his cool, leading to a few expletive-laden shouts.
That did the trick. Siri understood that well enough and replied: “What did I do to deserve this?”
It was as human as it could get, and both of us burst into laughter. Apart from the hilarity of the incident, what struck us was the cleverness of the artificial intelligence (AI) powering Siri to respond as it did.
AI has since advanced far beyond Siri and Alexa, though it can still provide entertainment with its quirky answers to questions that are a bit more complex than, say, how many goals Pele scored or who Sachin Tendulkar is.
AI is now active in many areas, from analysing medical data to aiding doctors, improving the robots used in surgeries, driving self-driven cars, and even keeping track of the health of our planet through satellites.
Imagine if there were an AI-driven robot to help us with day-to-day chores like cooking and cleaning. Or to find answers to all the problems that nag us. Like, ’Why are some people so stupid?’
Sure, you can Google that question, and a list of sites will pop up, which you can scroll through to find the answer you seek. But you wish there were one site that would give straight answers, however tricky and weird the question is.
Or how about this? You want to write about something that popped into your mind. If only there were a program that could do it for you, saving all the time needed to collect the data, check the details, and find the correct links, instead of struggling with it as broken thoughts keep interrupting the process, or a phone call draws you away from the task… if only wishes were horses.
Well, it does exist.
One of the latest sensations among AI experts is the Generative Pre-trained Transformer 3 (GPT-3) program, which has reached a level of performance surprising even those who invented it. It was created by OpenAI, a San Francisco-based artificial intelligence research lab co-founded by Elon Musk, and is a neural network program. Too technical? In simple words, it is wired like our brain, to collect, analyse, interpret, and respond to tasks.
The basic idea behind the development of such programs was to create chatbots, like those used by companies such as Google and by banks, that provide more human-like responses to customers. Siri and Alexa do the same thing.
But modern-day AI programs can read and analyse much more. GPT-3, for example, has crunched through the whole of Wikipedia and a vast library of digitised books, analysing hundreds of billions of words to mimic the way we write. The program is built to predict the next word in a sequence of words, and it uses 175 billion parameters to do so.
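GPT-3’s transformer network is vastly more sophisticated than this, but the core idea of next-word prediction can be sketched in a few lines. The toy model below (a hypothetical illustration, not how GPT-3 actually works internally) simply counts, for every word in a training text, which word most often follows it, then predicts that word:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

corpus = ("the more you practice the better you get "
          "the more you create the more you discover")
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "more" follows "the" most often
```

Where this sketch looks up a simple frequency table, GPT-3 weighs the entire preceding passage through billions of learned parameters, which is why its predictions read like fluent prose rather than word salad.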
Many such attempts to make computers read and write have not achieved great success. Anyone who has used Google Translate to understand something in a different language will know how unsatisfactory the results have been. However, it has improved vastly from its early days.
During a trial run, GPT-3 was assigned the role of a psychologist and asked: “How do we become more creative?” It wrote this in reply:
“I think creative expression is a natural byproduct of growing up in a diverse world. The more diverse the world is, the more you get exposed to different people, to different opportunities, to different places and to different challenges.
“And the more diverse that is, the more likely you’ll be to be able to put the dots together to form something new. And in many ways, I think if you want to be creative, you have to go for it. If you want to be a writer, you have to write, if you want to be a musician, you have to create music, if you want to be a comedian, you have to create comedy, and the more you create, the more likely it is that you’ll stumble onto some interesting stuff.”
Yes, the two paragraphs were written by a computer, not by a human.
More surprises were in store. After some simple mobile app codes were fed to GPT-3, it proved it could design similar programs. The program had expanded enough to do this through machine learning, or by teaching itself.
Then they tried fiction writing. Just write one or two lines as a prompt. Voila! It was all done in a few seconds.
“The model can create original long-form text, such as an essay or article, in less than 10 seconds, given a one-sentence prompt. The best essays written by the model fooled 88% of people into believing that they were written by humans,” reports Data Driven Investor.
A student at the University of California went further. Liam Porr started a blog by posting text generated by the program.
“I would write the title and introduction, add a photo, and let GPT-3 do the rest. The blog has had over 26 thousand visitors, and we now have about 60 loyal subscribers,” he wrote in his newsletter. The articles were posted on Hacker News, which is widely read in Silicon Valley, and no one suspected they were machine-generated text.
It will have a major impact on all kinds of writing, say techies who are enthusiastic about this, though some do see dark clouds ahead. Anyone can write essays, poems, and books with the help of this program.
“For example, after analyzing thousands of poems and poets, you can simply input the name of a poet, and GPT-3 can create an original poem similar to the author's style. GPT-3 replicates the texture, rhythm, genre, cadence, vocabulary, and style of the poet's previous works to generate a brand-new poem,” says Twilio, a US-based cloud communication firm.
But that is raising alarms already. (If your job involves writing and editing, this blog is worth a read.)
An example of how it could be misused is already there. A game called AI Dungeon used this program, which is not on the open market but is available for commercial use, for its machine-learning text adventure. Thousands of subscribers build their fantasy worlds in this game.
The game had allowed private groups to generate sexually explicit content, but it soon veered off into material involving pedophilia. Remember, the program has read all kinds of material out there and learned to mimic the good and the bad.
The furore it created led the gaming company to kick out some subscribers and add more safety filters to prevent such occurrences.
The arrival of every new technology raises such red flags. It started with the humble bicycle. When cycles became popular, the New York Journal of Commerce complained in 1896 that restaurants and theatres were losing US$100 million as people travelled further in their spare time.
But the speed at which the internet and related technology are evolving does raise some concerns. Sociologist Zeynep Tufekci, whom The New York Times rightly describes as someone far ahead in seeing the future, wrote as early as 2015 that code is being written so fast, with little thought given to its safety aspects. The recent attack on a major gas pipeline in the US is an example of this.
The tech companies in Silicon Valley have realised the unexpected consequences of the unbridled race to develop internet businesses. They are now trying hard to incorporate safety measures into their products and services.
The problem, however, is that they can only put in safety measures against misuse they can anticipate. The real worry is the unforeseen consequences that may emerge.
Consider social media. Platforms like Facebook and Instagram were well-intentioned. They aimed to enable people to connect with each other and expand their horizons. They helped many noble ventures, and small businesses and entrepreneurs gained by them. But no one had foreseen their dark side: how these platforms could be used even to manipulate an election in a foreign country, as Mark Zuckerberg admitted before the US Senate.
“It’s clear now that we didn’t do enough to prevent these tools from being used for harm,” he said. “That goes for fake news, foreign interference in elections, and hate speech, as well as developers and data privacy.”
Now the genie is out of the bottle. From the US to India, democracies across the world are facing threats as domestic and foreign players find ways to exploit the situation.
Throw into this mix artificial intelligence that can produce, promote, and control much more efficiently. Fake narratives could be generated in seconds, and thousands of such articles, essays, and fake studies could fill cyberspace. Like counterfeit notes, they would get harder and harder to detect as time goes by.
What happens if it is weaponised in a manner we can’t foresee? Or, even scarier, what guarantee is there that these programs won’t themselves become more intelligent than we want?
Science fiction doyen Arthur C. Clarke predicted decades ago that one day there would be computers more intelligent than human beings.
Some of Clarke’s predictions were way off the mark. But some did turn out to be prophetic, like the emergence of the internet (watch the video that follows the link above).
So are these warnings about technology running riot mere alarmist predictions? Maybe. But then again, Clarke himself had pointed out the flaw in predicting the future:
“So if what I say now seems to you to be very reasonable, then I’ll have failed completely. Only if what I tell you appears absolutely unbelievable, have we any chance of visualizing the future as it really will happen.”
The clue to the blurb: The quote is from a blog written by GPT-3.
Stunning. Scary too.