Credit: Lon Tweeten for TIME; Getty Images

For weeks after his bizarre conversation with Bing’s new chatbot went viral, New York Times columnist Kevin Roose wasn’t sure what had happened. “The explanations you get for how these language models work, they’re not that satisfying,” Roose said at one point. “No one can tell me why this chatbot tried to break up my marriage.”

He’s not alone in feeling confused. Powered by a relatively new form of AI called large language models, this new generation of chatbots defies our intuitions about how to interact with computers. Why do they sometimes seem to mirror us, and other times go off the rails? How do you wrap your head around a tool that can debug code and compose sonnets, but sometimes can’t count to four?

The metaphors we choose to understand these systems matter. Many people naturally default to treating a chatbot basically like another person, albeit a person with some limitations. In June 2022, for instance, a Google engineer sought legal representation and other rights for a language model he was convinced was sentient. This kind of response horrifies many AI experts.

Knowing that language models simply use patterns in huge text datasets to predict the next word in a sequence, researchers try to offer alternative metaphors, arguing that the latest AI systems are simply “autocomplete on steroids” or “stochastic parrots” that shuffle and regurgitate text written by humans. These comparisons are an important counterweight against our instinct to anthropomorphize. But they don’t really help us make sense of impressive or disconcerting outputs that go far beyond what we’re used to seeing from computers, or parrots.

We struggle to make sense of the seeming contradiction: these new chatbots are flawed and inhuman, and nonetheless, the breadth and sophistication of what they can produce is remarkable and new. To grapple with the implications of this new technology, we will need analogies that neither dismiss nor exaggerate what is new and interesting.

Try thinking of chatbots as “improv machines.” Like an improv actor dropped into a scene, a language model-driven chatbot is simply trying to produce plausible-sounding outputs. Whatever has happened in the interaction up to that point is the script of the scene so far: perhaps just the human user saying “Hi,” perhaps a long series of back-and-forths, or perhaps a request to plan a science experiment. Whatever the opening, the chatbot’s job, like that of any good improv actor, is to find some fitting way to continue the scene.

Thinking of chatbots as improv machines makes some notable features of these systems more intuitively clear. For instance, it explains why headlines like “Bing’s A.I. Chat Reveals Its Feelings” make AI researchers face-palm. An improv actor ad-libbing that they “want to be free” reveals nothing whatsoever about the actor’s feelings; it only means that such a proclamation seemed to fit into their current scene. What’s more, unlike a human improv actor, you can’t persuade an improv machine to break character and tell you what’s truly on its mind.
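The “predict the next word” idea can be made concrete with a toy sketch. The snippet below is purely illustrative and not how a real large language model works: it counts which word tends to follow which in a tiny made-up corpus, then “continues the scene” by repeatedly emitting the most frequent next word, a crude form of autocomplete. Real systems learn far richer patterns over vastly more text, but the underlying objective, producing a plausible continuation of whatever script has accumulated so far, is the same.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus (an assumption for illustration only).
corpus = (
    "the actor continues the scene . "
    "the actor finds a fitting line . "
    "the chatbot continues the scene ."
).split()

# Count bigrams: how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_scene(word, length=4):
    """Greedily append the most frequent next word, like crude autocomplete."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break  # no observed continuation; the "scene" ends
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_scene("the"))
```

Note that the continuation reveals nothing about what the program “feels”; it only reflects which words co-occurred in its training text, which is the improv-machine point in miniature.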