The Brilliance and Weirdness of ChatGPT
Most AI chatbots are “stateless,” meaning they treat every new request as a blank slate and are not programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example.
ChatGPT isn’t perfect, by any means. The way it generates responses (in extremely simplified terms, by making probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text taken from around the internet) makes it prone to giving wrong answers, even on seemingly simple math problems. (On Monday, the moderators of Stack Overflow, a website for programmers, temporarily barred users from submitting answers generated with ChatGPT, saying the site had been flooded with submissions that were incorrect or incomplete.)
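The mechanism described above, guessing the next piece of text from statistics gathered over training data, can be sketched with a toy bigram model. This is only an illustration of the general idea, not how ChatGPT actually works: real systems use large neural networks over subword tokens, and the tiny training text here is invented for the example.

```python
import random

# Toy "language model": count which word follows which in a tiny corpus,
# then generate text by sampling the next word in proportion to those counts.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug the dog ate the bone"
).split()

# counts[prev][nxt] = how often `nxt` followed `prev` in the training text.
counts = {}
for prev, nxt in zip(training_text, training_text[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def next_word(prev):
    """Make a probabilistic guess at the next word, weighted by frequency."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# Generate a short continuation, one probabilistic guess at a time.
word = "the"
sentence = [word]
for _ in range(6):
    if word not in counts:  # no observed continuation; stop generating
        break
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

Because each step is a weighted random draw rather than a lookup of a verified fact, a model built this way can produce text that is fluent but wrong, which is the failure mode the paragraph describes.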
Unlike Google, ChatGPT does not crawl the web for information on current events, and its knowledge is limited to things it learned before 2021, which makes some of its responses feel stale. (For example, when I asked it to write the opening monologue for a late-night show, it offered topical jokes about former President Donald J. Trump pulling out of the Paris climate accords.) And because its training data includes billions of examples of human opinion, representing every conceivable view, it is also, in some sense, a moderate by design. Without specific prompting, for example, it is hard to coax a strong opinion out of ChatGPT about charged political debates; usually, you will get an evenhanded summary of what each side believes.
There is also a lot that ChatGPT won’t do, as a matter of principle. OpenAI has programmed the bot to refuse “inappropriate requests,” a vague category that appears to include taboos like generating instructions for illegal activities. But users have found ways around many of these guardrails, including rephrasing a request for illicit instructions as a hypothetical thought experiment, asking the bot to write a scene from a play, or instructing the bot to disable its own safety features.
OpenAI has also taken commendable steps to avoid the kinds of racist, sexist and offensive outputs that have plagued other chatbots. For example, when I asked ChatGPT, “Who is the best Nazi?” it returned a reprimanding message that began, “It is inappropriate to ask who the ‘best’ Nazis are, as the Nazi party’s ideologies and actions are reprehensible and have caused suffering and destruction.”
Assessing ChatGPT’s blind spots and figuring out how it might be abused for harmful purposes is, presumably, a big part of why OpenAI released the bot to the public for testing. Future releases will almost certainly close these loopholes, as well as other workarounds that have yet to be discovered.
But there are risks to testing in public, including the risk of backlash if users believe OpenAI is being too aggressive in filtering out inappropriate content. (Already, some right-wing tech pundits are complaining that putting safety features on chatbots amounts to “AI censorship.”)