OpenAI, a nonprofit research company co-founded by Elon Musk (who recently confirmed he has left the board), released a new model, GPT-2, that writes "fake news" and fiction well. Like, dangerously well. You can watch a demo on Twitter.
GPT-2 is a text generator. When fed a few words or a whole page, it predicts what should come next. The output is remarkably believable for an AI. It was trained on a dataset of millions of web pages, a collection of 40GB of text (about 35,000 copies of Moby Dick). GPT-2 can perform translation and summarization, pass reading comprehension tests, and handles language modeling well in general.
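To get a feel for what "predicts what should come next" means, here is a minimal sketch of the same idea at toy scale: a bigram model that counts which word tends to follow which in a training text, then predicts the most frequent follower. This is an illustration only; GPT-2 itself uses a large Transformer neural network, not simple word counts.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Return the most frequently observed next word, or None if unseen."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# A tiny made-up corpus, just for demonstration
corpus = "the train was stolen the train was found the carriage was empty"
model = train_bigram_model(corpus)
print(predict_next(model, "train"))  # "was"
print(predict_next(model, "the"))    # "train"
```

Scale the same principle up from a twelve-word corpus to 40GB of text, and from word pairs to long-range context, and you get a system that can continue a prompt with plausible paragraphs rather than single words.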
When fed this sample text written by humans, "A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown," the software generated a convincing seven-paragraph fake news story, complete with quotes from government officials.
Scared? So is OpenAI. GPT-2 is so good, and the risk of malicious use so high, that they decided not to publicly release the full model while they study the ramifications.
Image Source: UnderConsideration