The AI, GPT-2, was originally designed to answer questions, summarize stories, and translate text. But researchers feared it could be used to produce large quantities of misinformation. So far, however, it has mostly been put to more benign uses, such as powering text-adventure games and writing stories about unicorns. In its blog post, OpenAI said it hopes the full version will help researchers develop better ways to detect machine-generated text. "We are releasing this model to aid research on detecting synthetic text," OpenAI wrote. But some argue that this technology is coming whether we want it or not, and that OpenAI should have shared its work immediately so researchers could develop tools to combat, or at least detect, bot-generated text. Others have suggested the staged release was simply a way to hype GPT-2. Regardless, and for better or worse, GPT-2 is no longer under lock and key.