AI learns how to play Minecraft by watching videos (AI News, 29 June 2022)
https://www.artificialintelligence-news.com/2022/06/29/ai-learns-how-to-play-minecraft-by-watching-videos/

OpenAI has trained a neural network to play Minecraft by applying Video PreTraining (VPT) to a massive unlabeled video dataset of human Minecraft play, supplemented by just a small amount of labeled contractor data.

With a bit of fine-tuning, the AI research and deployment company is confident that its model can learn to craft diamond tools, a task that usually takes proficient humans over 20 minutes (24,000 actions). Its model uses the native human interface of keypresses and mouse movements, making it quite general, and represents a step towards general computer-using agents.

A spokesperson for the Microsoft-backed firm said: “The internet contains an enormous amount of publicly available videos that we can learn from. You can watch a person make a gorgeous presentation, a digital artist draw a beautiful sunset, and a Minecraft player build an intricate house. However, these videos only provide a record of what happened but not precisely how it was achieved, i.e. you will not know the exact sequence of mouse movements and keys pressed.

“If we would like to build large-scale foundation models in these domains as we’ve done in language with GPT, this lack of action labels poses a new challenge not present in the language domain, where “action labels” are simply the next words in a sentence.”

To utilise the wealth of unlabeled video data available on the internet, OpenAI introduces a novel yet simple semi-supervised imitation learning method: Video PreTraining (VPT). The team begins by gathering a small dataset from contractors, recording not only their video but also the actions they took, which in this case are keypresses and mouse movements. With this data the company can train an inverse dynamics model (IDM), which predicts the action being taken at each step in the video. Importantly, the IDM can use past and future information to guess the action at each step.
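OpenAI's code is not reproduced in the article, but the idea can be sketched. The following minimal PyTorch example, whose class name, action vocabulary sizes, and layer widths are illustrative assumptions rather than OpenAI's actual architecture, shows an IDM that attends over a whole window of frames, past and future alike, before predicting the keypress and mouse action at each step:

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Toy IDM: given a short window of frames around time t (past AND future),
    predict the action taken at t. Seeing future frames is what makes this
    easier than behavioral cloning, which must act from past frames only."""

    def __init__(self, n_keys=23, n_mouse_bins=121, d_model=256):
        super().__init__()
        # Tiny per-frame encoder; the real model would be far larger.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, d_model),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        # No causal mask: every timestep can attend to past *and* future frames.
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.key_head = nn.Linear(d_model, n_keys)          # which keys are held down
        self.mouse_head = nn.Linear(d_model, n_mouse_bins)  # discretised mouse movement

    def forward(self, frames):  # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        z = self.frame_encoder(frames.flatten(0, 1)).view(b, t, -1)
        z = self.temporal(z)
        return self.key_head(z), self.mouse_head(z)
```

The design point the quote below alludes to is the absence of a causal mask: because the IDM may peek at future frames, inferring which action was just taken is far easier than inferring which action should be taken next.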

The spokesperson added: “This task is much easier and thus requires far less data than the behavioral cloning task of predicting actions given past video frames only, which requires inferring what the person wants to do and how to accomplish it. We can then use the trained IDM to label a much larger dataset of online videos and learn to act via behavioral cloning.”
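A minimal sketch of that second stage, again in PyTorch and again with hypothetical helper names and tensor shapes, might pseudo-label unlabeled footage with the trained IDM and then take behavioral-cloning gradient steps on those labels with a causal policy:

```python
import torch
import torch.nn.functional as F

def pseudo_label(idm, video_frames, window=64):
    """Run a trained IDM over unlabeled footage to recover per-frame actions.
    `video_frames` is a (num_frames, 3, H, W) tensor; the chunking and the
    output format here are illustrative assumptions."""
    labels = []
    with torch.no_grad():
        for start in range(0, video_frames.shape[0] - window + 1, window):
            clip = video_frames[start:start + window].unsqueeze(0)   # (1, T, 3, H, W)
            key_logits, mouse_logits = idm(clip)
            labels.append((key_logits.sigmoid() > 0.5,               # predicted key presses
                           mouse_logits.argmax(dim=-1)))             # predicted mouse bin
    return labels

def behavioral_cloning_step(policy, optimizer, frames, key_targets, mouse_targets):
    """One gradient step of behavioral cloning on IDM-labelled video.
    Unlike the IDM, the policy must be causal: it only ever sees past frames."""
    key_logits, mouse_logits = policy(frames)
    loss = (F.binary_cross_entropy_with_logits(key_logits, key_targets.float())
            + F.cross_entropy(mouse_logits.flatten(0, 1), mouse_targets.flatten()))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```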

VPT paves the way for agents to learn to act by watching the vast number of videos on the internet, according to OpenAI.

The spokesperson said: “Compared to generative video modeling or contrastive methods that would only yield representational priors, VPT offers the exciting possibility of directly learning large scale behavioral priors in more domains than just language. While we only experiment in Minecraft, the game is very open-ended and the native human interface (mouse and keyboard) is very generic, so we believe our results bode well for other similar domains, e.g. computer usage.”

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI withholds its latest research fearing societal impact (AI News, 15 February 2019)
https://www.artificialintelligence-news.com/2019/02/15/openai-latest-research-societal-impact/

OpenAI has decided not to publish its latest research, fearing its potential misuse and the negative societal impact that could follow.

The institute, backed by the likes of Elon Musk and Peter Thiel, developed an AI which can produce convincing ‘fake news’ articles.

Articles produced by the AI writer can be on any subject and merely require a brief prompt before it gets to work unsupervised.

The AI was trained on data scraped from roughly eight million webpages, restricted to pages shared on Reddit in posts with a ‘karma’ score of three or more. That filter means the content resonated with at least some users, although for what reason it cannot be sure.
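For readers curious what that karma filter looks like in practice, here is a minimal sketch of such a link collector. It assumes a line-delimited JSON dump of Reddit submissions and field names ('score', 'url') chosen for illustration; OpenAI has not published its actual scraping pipeline.

```python
import json

KARMA_THRESHOLD = 3  # keep links only from submissions scoring at least this much

def collect_outbound_links(submission_dump_path):
    """Walk a dump of Reddit submissions (one JSON object per line) and keep
    external links whose post earned three or more karma. File format and
    field names are assumptions, not OpenAI's actual pipeline."""
    urls = set()
    with open(submission_dump_path) as f:
        for line in f:
            post = json.loads(line)
            if post.get("score", 0) >= KARMA_THRESHOLD:
                url = post.get("url", "")
                if url.startswith("http") and "reddit.com" not in url:
                    urls.add(url)  # an outbound page worth scraping for training text
    return urls
```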

Often, the resulting text – generated word-by-word – is coherent but fabricated. That even includes ‘quotes’ used in the article.
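The withheld model itself is not public, but the mechanism is ordinary autoregressive sampling. As a rough sketch using the small GPT-2 checkpoint that OpenAI did release, loaded through the Hugging Face transformers library, the prompt and sampling settings below are illustrative rather than OpenAI's exact configuration:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small released checkpoint; the withheld larger model is not
# available, so this is purely illustrative of word-by-word generation.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Scientists announced today that"   # the brief prompt the article mentions
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_length=100,                      # keep appending one token at a time
        do_sample=True,                      # sample rather than greedy-decode
        top_k=40,                            # a common top-k truncation for readable text
        pad_token_id=tokenizer.eos_token_id, # silence the missing-pad-token warning
    )

print(tokenizer.decode(output[0], skip_special_tokens=True))
```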

Here’s a sample provided by OpenAI (the generated excerpt was embedded in the original article):

Most technologies can be exploited for harmful purposes, but that doesn’t mean advancements should be halted. Computers have enriched our lives but stringent laws and regulations have been needed to limit their more sinister side.

Here are some ways OpenAI sees advancements like its own benefiting society:

  • AI writing assistants
  • More capable dialogue agents
  • Unsupervised translation between languages
  • Better speech recognition systems

In contrast, here are some examples of negative implications:

  • Generate misleading news articles
  • Impersonate others online
  • Automate the production of abusive or faked content to post on social media
  • Automate the production of spam/phishing content

With some advancements, we don’t thoroughly understand their impact until after they’ve been developed. When he produced his famous equation, Einstein didn’t expect it would one day be used to construct nuclear weapons.

Hiroshima will remain among the worst man-made disasters in history, and we can hope it continues to serve as a warning about the use of nuclear weapons. There is rightfully a taboo around things designed to cause bloodshed, but societal damage can also be devastating.

We’re already living in an age of bots and disinformation campaigns. Some are used by foreign nations to influence policy and sow disorder, while others are created to spread fear and drive agendas.

Because these campaigns are not designed to kill, there’s more disassociation from their impact. In the past year alone, we’ve seen children being split from their families at borders and refugees ‘waterboarded’ at school by fellow students due to deceitful anti-immigration campaigns.

Currently, there’s at least a moderate amount of accountability with such campaigns. Somewhere along the line, a person has produced the article being read and can be held accountable for consequences if misinformation has been published.

AIs like the one created by OpenAI make it a lot more difficult to hold someone accountable. Articles can be mass-published around the web to change public opinion on a topic, and that has terrifying implications.

The idea of fabricated articles, combined with DeepFake images and videos, should be enough to send a chill down anyone’s spine.

OpenAI has accepted its own responsibility and made the right decision not to make its latest research public at this time. Hopefully, other players follow OpenAI’s lead in considering implications.

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.
