How AI Knows Things No One Told It

Researchers are still struggling to understand how AI models trained to parrot internet text can perform advanced tasks such as running code, playing games and trying to break up a marriage.

“Everything we want to do with them in order to make them better or safer or anything like that seems to me like a ridiculous thing to ask ourselves to do if we don’t understand how they work,” says Ellie Pavlick of Brown University, one of the researchers working to fill that explanatory void.

GPT is short for generative pretrained transformer. The models rely on a machine-learning system called a neural network, whose structure is modeled loosely on the connected neurons of the human brain. The code for these programs is relatively simple and fills just a few screens. It sets up an autocorrection algorithm, which chooses the most likely word to complete a passage based on laborious statistical analysis of hundreds of gigabytes of Internet text. Additional training ensures the system presents its results in the form of dialogue.

In this sense, all it does is regurgitate what it learned: it is a “stochastic parrot,” in the words of Emily Bender, a linguist at the University of Washington. But LLMs have also managed to ace the bar exam, explain the Higgs boson in iambic pentameter and attempt to break up their users’ marriages. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.

That GPT and other AI systems perform tasks they were not trained to do, displaying “emergent abilities,” has surprised even researchers who were generally skeptical of the hype over LLMs. “I don’t know how they’re doing it or if they could do it more generally the way humans do—but they’ve challenged my views,” says Melanie Mitchell, an AI researcher at the Santa Fe Institute.
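The "choose the most likely next word" idea can be sketched with a toy bigram counter in Python. This is an illustrative stand-in only, not how GPT actually works: real LLMs use transformer neural networks over subword tokens, but the statistical core is the same idea of predicting the likeliest continuation from observed text.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def most_likely_next(model, word):
    """Return the statistically most frequent continuation, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(most_likely_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Scaled up from ten words of toy text to hundreds of gigabytes, and from bigram counts to a deep network, this simple objective is what the training process optimizes.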
The philosopher Raphaël Millière typed in a program to calculate the 83rd number in the Fibonacci sequence. “It’s multistep reasoning of a very high degree,” he says. And the bot nailed it. When Millière asked directly for the 83rd Fibonacci number, however, GPT got it wrong, which suggests the system wasn’t just parroting the Internet; rather, it was performing its own calculations to reach the correct answer.

Researchers are finding that these systems seem to achieve genuine understanding of what they have learned.

The researchers concluded that it was playing Othello roughly like a human: by keeping a game board in its “mind’s eye” and using this model to evaluate moves. Li says he thinks the system learns this skill because it is the most parsimonious description of its training data. “If you are given a whole lot of game scripts, trying to figure out the rule behind it is the best way to compress,” he adds.

The system had no independent way of knowing what a box or key is, yet it picked up the concepts it needed for this task.

Researchers marvel at how much LLMs are able to learn from text: the wider the range of the data, the more general the rules the system will discover. “Maybe we’re seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them,” he says. “And so the only way to explain all of this data is [for the model] to become intelligent.”
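The article doesn't reproduce the program Millière used, but an iterative Fibonacci routine in Python is a plausible stand-in (assuming the common convention fib(1) = fib(2) = 1, under which the 83rd number is 99,194,853,094,755,497):

```python
def fib(n):
    """Return the nth Fibonacci number, with fib(1) == fib(2) == 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # each step advances the pair one position
    return a

print(fib(83))  # 99194853094755497
```

Stepping through a loop like this 83 times, tracking two running values, is exactly the kind of multistep bookkeeping that makes the result hard to reach by pattern-matching alone.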
LLMs do, in fact, learn from their users’ prompts—an ability known as “in-context learning.” “It’s a different sort of learning that wasn’t really understood to exist before,” says Ben Goertzel, founder of the AI company SingularityNET. The model’s outputs are determined by the last several thousand words it has seen.

Entire websites are devoted to “jailbreak” prompts that overcome the system’s “guardrails”—restrictions that stop the system from telling users how to make a pipe bomb, for example—typically by directing the model to pretend to be a system without guardrails. Some people use jailbreaking for sketchy purposes, yet others deploy it to elicit more creative answers. “It will answer scientific questions, I would say, better” than if you just ask it directly, without the special jailbreak prompt. “It’s better at scholarship.”

Another type of in-context learning happens via “chain of thought” prompting, which means asking the network to spell out each step of its reasoning—a tactic that makes it do better at logic or arithmetic problems requiring multiple steps. (One thing that made Millière’s example so surprising is that the network found the Fibonacci number without any such coaching.)

Researchers have found that in-context learning follows the same basic computational procedure as standard learning, known as gradient descent. This procedure was not programmed; the system discovered it without help.
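Gradient descent, the standard learning procedure mentioned above, adjusts parameters step by step in the direction that reduces error. A minimal single-variable sketch in Python, for illustration only (real LLM training applies the same rule simultaneously across billions of parameters):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient to minimize a function."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # move downhill by a small learning-rate step
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # converges toward the minimum at 3.0
```

The surprising research finding is that a trained transformer can carry out something computationally equivalent to this update rule inside a single forward pass over its prompt, without its weights changing at all.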
LLMs may have other latent abilities that no one has discovered yet. “Every time we test for a new ability that we can quantify, we find it.”
ChatGPT Explains Why AIs like ChatGPT Should Be Regulated

We asked the text-generating AI ChatGPT to talk about its own flaws. When we asked ChatGPT to generate a Scientific American editorial, it produced a thoughtful-sounding essay.

Despite sounding more sophisticated and realistic than perhaps any language model before it, ChatGPT cannot actually think for itself, and it can produce falsehoods and illogical statements that merely look reasonable. ChatGPT is capable of spouting incorrect information with apparent confidence.

The essay raised the potential for ChatGPT to be used for nefarious purposes, such as impersonating individuals or spreading misinformation. Overly strict regulations could stifle innovation and prevent the technology from reaching its full potential; on the other hand, insufficient regulation could lead to abuses of the technology.

Another concern is the creation of ChatGPT and other large language models that are trained on vast amounts of text generated by human writers. These writers, who may include novelists, journalists and bloggers, receive no compensation for the use of their work in training these language models. ChatGPT and other language models could be used to generate fake news articles, impersonate individuals or spread other forms of misinformation.

In the case of a question like “how to change a lightbulb,” it’s possible that ChatGPT may provide a response that is accurate and helpful. However, it’s also possible that the model may provide an inaccurate or incomplete response, in which case it would be up to the user to use their own judgment and verify the information provided by the model.

Large language models like ChatGPT are trained to generate text that is fluent and coherent, but they may not always be able to generate responses that are as nuanced or creative as those written by a human.
Prompts written by large language models may include repetitions or unusual combinations of words and phrases. This is because language models are trained to generate text by identifying patterns and regularities in the data, and they may sometimes produce responses that contain repetitive or unusual elements.

As a large language model trained by OpenAI, I do not have personal experiences or the ability to browse the internet.