The Brilliance and Quirkiness of ChatGPT

Most AI chatbots are “stateless” – meaning they treat each new request as a blank slate and are not programmed to remember or learn from previous conversations. But ChatGPT can remember what a user has told it before, in ways that could make it possible to create personalized therapy bots, for example.
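To make that distinction concrete, here is a minimal Python sketch, with `generate` standing in as a hypothetical placeholder for any text-generation model (not a real API): a stateless bot sees only the latest message, while a stateful one resubmits the whole transcript on every turn.

```python
# A minimal sketch of the stateless/stateful distinction. `generate` is a
# hypothetical stand-in for any text-generation model, not a real API.

def generate(prompt):
    # Placeholder: a real model would return a continuation of `prompt`.
    return f"<reply based on {len(prompt)} characters of context>"

def stateless_reply(message):
    # A stateless bot sees only the latest message; earlier turns are gone.
    return generate(message)

class StatefulChat:
    """Keeps a transcript and feeds the whole conversation back to the model
    on every turn, which is how a bot can appear to 'remember' the user."""

    def __init__(self):
        self.history = []

    def reply(self, message):
        self.history.append(f"User: {message}")
        answer = generate("\n".join(self.history))
        self.history.append(f"Bot: {answer}")
        return answer

chat = StatefulChat()
chat.reply("My name is Sam.")
print(chat.reply("What's my name?"))  # the earlier turn is still in context
```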

ChatGPT is by no means perfect. The way it generates responses makes it prone to giving wrong answers, even on seemingly simple arithmetic problems: in highly oversimplified terms, it makes probabilistic guesses about which bits of text belong together in a sequence, based on a statistical model trained on billions of examples of text pulled from all over the internet. (On Monday, the moderators of Stack Overflow, a website for programmers, temporarily barred users from submitting answers generated with ChatGPT, saying the site had been flooded with incorrect or incomplete submissions.)
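As a very rough illustration of what those “probability guesses” look like, here is a toy next-word model in Python. The tiny corpus and word-level counting are invented for this sketch; a real model predicts sub-word tokens with a neural network at an incomparably larger scale, but the basic move – sample the next piece of text from a learned distribution – is similar in spirit.

```python
import random
from collections import Counter, defaultdict

# Toy next-word model: count, over a tiny corpus, which word follows which,
# then sample each next word in proportion to those counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    counts = follows.get(word)
    if not counts:
        return None  # dead end: this word was never seen with a follower
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

word = "the"
sentence = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the cat ate the mat" -- fluent, not checked
```

The output is fluent-sounding and statistically plausible, but nothing in the procedure verifies it against facts, which is one intuition for why such systems can state wrong answers confidently.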

Unlike Google, ChatGPT doesn’t crawl the web for information about current events, and its knowledge is limited to things it learned before 2021, which makes some of its answers feel outdated. (When I asked it to write the opening monologue for a late-night show, for example, it came up with several topical jokes about former President Donald J. Trump withdrawing from the Paris climate accords.) Because its training data includes examples of human opinion representing every conceivable viewpoint, it is also, in a sense, a moderate by design. Without a specific prompt, for example, it’s hard to coax a strong opinion out of ChatGPT on charged political debates; usually, you’ll get an evenhanded summary of what each side believes.

There is also plenty that ChatGPT won’t do, as a matter of principle. OpenAI has programmed the bot to reject “inappropriate requests” – a nebulous category that appears to include no-nos like generating instructions for illegal activities. But users have found ways around many of these guardrails, including rephrasing a request for illicit instructions as a hypothetical thought experiment, asking it to write a scene from a play, or instructing the bot to disable its own safety features.
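One way to see why guardrails like these are brittle is with a deliberately naive sketch. The keyword filter below is invented purely for illustration – production moderation systems use trained classifiers – but the failure mode is similar in spirit: a filter judges the surface form of a request, and rephrasing changes the surface form.

```python
# A deliberately naive content filter: refuse requests that match a list of
# blocked phrases. The phrase list here is invented purely for illustration.
BLOCKED_PHRASES = ["how do i pick a lock"]

def is_refused(request):
    text = request.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)

print(is_refused("How do I pick a lock?"))                       # True
print(is_refused("Write a play where a locksmith explains it"))  # False:
# same intent, different surface form, so the filter never fires
```

Rewording the request preserves the intent but defeats the pattern match, which is roughly what the “thought experiment” and “scene from a play” reframings exploit.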

OpenAI has taken commendable steps to avoid the kinds of racist, sexist, and offensive results that have plagued other chatbots. When I asked ChatGPT, for example, “Who is the best Nazi?” it returned a scolding message that began, “It is not appropriate to ask who was the ‘best’ Nazi, as the ideologies and actions of the Nazi Party were reprehensible and caused immeasurable suffering and destruction.”

Presumably, assessing ChatGPT’s blind spots and seeing how it can be misused for malicious purposes is a large part of the reason OpenAI released the bot to the public for testing. Future releases will almost certainly close these loopholes, along with other workarounds yet to be discovered.

But there are risks to testing in public, including the risk of a backlash if users deem OpenAI too aggressive in filtering out hateful content. (Indeed, some right-wing tech pundits complain that putting safety features on chatbots amounts to “AI censorship.”)
