An artificially intelligent system trained to mimic natural human language has been deemed too dangerous to release by its creators.
Researchers at OpenAI say they have created an AI writer which “generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks and performs rudimentary reading comprehension, machine translation, question answering and summarisation — all without task-specific training.”
But they are withholding it from public use “due to our concerns about malicious applications of the technology”.
They cited dangers such as the technology being used to generate misleading news articles and to impersonate others online.
“Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code. We are not releasing the dataset, training code, or GPT-2 model weights,” OpenAI said in a blog post.
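The “sampling code” OpenAI mentions is the part of a language-model release that draws each next word from the model's predicted probability distribution over its vocabulary. As a rough illustration of that idea only — GPT-2 itself is a large neural network, not a lookup table, and the tiny bigram table and word list below are invented for this sketch — next-word sampling can be shown in a few lines:

```python
import random

# Illustrative bigram "model": for each word, a list of possible next
# words with their probabilities. Real models like GPT-2 compute these
# probabilities with a neural network over a vocabulary of tens of
# thousands of tokens; this toy table is just for demonstration.
BIGRAMS = {
    "the": [("model", 0.5), ("text", 0.5)],
    "model": [("generates", 1.0)],
    "generates": [("the", 0.6), ("text", 0.4)],
    "text": [("<end>", 1.0)],
}

def sample_next(word, rng):
    """Draw the next word according to the distribution for `word`."""
    r = rng.random()
    cumulative = 0.0
    for candidate, prob in BIGRAMS[word]:
        cumulative += prob
        if r < cumulative:
            return candidate
    return BIGRAMS[word][-1][0]  # guard against floating-point rounding

def generate(start="the", max_words=10, seed=0):
    """Sample words one at a time until "<end>" or the length cap."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words:
        nxt = sample_next(words[-1], rng)
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate())
```

Withholding the trained weights while publishing code like this means others can see how generation works but cannot reproduce the full model's fluent output without retraining it themselves.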
“This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today, we believe that the AI community will eventually need to tackle the issue of publication norms in a thoughtful way in certain research areas.”