Elon Musk Archives - AI News
https://www.artificialintelligence-news.com/tag/elon-musk/

Gary Marcus criticises Elon Musk’s AGI prediction (1 June 2022)
https://www.artificialintelligence-news.com/2022/06/01/gary-marcus-criticises-elon-musk-agi-prediction/

Gary Marcus has criticised a prediction by Elon Musk that AGI (Artificial General Intelligence) will be achieved by 2029 and challenged him to a $100,000 bet.

Marcus founded Robust.AI and Geometric Intelligence (acquired by Uber), is professor emeritus of psychology and neural science at NYU, and co-authored Rebooting AI. His views on AGI are worth listening to.

AGI is the kind of artificial intelligence depicted in movies like 2001: A Space Odyssey (‘HAL’) and Iron Man (‘J.A.R.V.I.S.’). Unlike current AIs, which are trained for a specific task, an AGI would be more like the human brain, able to learn new tasks without being purpose-built for them.

Most experts believe AGI will take decades to achieve, and some think it will never be possible. In one survey of leading experts in the field, the average estimate was a 50 percent chance of AGI being developed by 2099.

Elon Musk is far more optimistic, tweeting that he expects AGI by 2029.

Musk’s tweet received a response from Marcus in which he challenged the SpaceX and Tesla founder to a $100,000 bet that he’s wrong about the timing of AGI.

AI expert Melanie Mitchell of the Santa Fe Institute suggested the bet be placed on longbets.org. Marcus says he’s up for the bet on the platform – where the loser donates the money to a philanthropic cause – but he’s yet to receive a response from Musk.

In a post on his Substack, Marcus explained why he’s calling Musk out on his prediction.

“Your track record on betting on precise timelines for things is, well, spotty,” wrote Marcus. “You said, for instance in 2015, that (truly) self-driving cars were two years away; you’ve pretty much said the same thing every year since. It still hasn’t happened.”

Marcus argues that pronouncements like those Musk is famous for can be dangerous, drawing attention away from the kinds of questions that first need answering.

“People are very excited about the big data and what it’s giving them right now, but I’m not sure it’s taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world,” said Marcus in 2016 in an Edge.org interview.

Marcus points to an incident in April – in which a Tesla on Autopilot crashed into a $3 million private jet at a mostly empty airport – as an example of why the focus needs to be on solving serious issues with current AI systems before rushing towards AGI:

“It’s easy to convince yourself that AI problems are much easier than they actually are, because of the long tail problem,” argues Marcus.

“For everyday stuff, we get tons and tons of data that current techniques readily handle, leading to a misleading impression; for rare events, we get very little data, and current techniques struggle there.”
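
To make the long-tail point concrete, here is a minimal simulation sketch; the numbers and the Zipf-like distribution of driving situations are illustrative assumptions rather than real data from Tesla or Marcus.

```python
# A toy illustration of the long-tail problem described above: common
# situations dominate the training data while many rare ones barely appear.
# All figures here are made up for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)

n_situation_types = 1_000      # hypothetical catalogue of driving situations
n_training_examples = 100_000  # hypothetical training-set size

# Zipf-like frequencies: situation k occurs with probability proportional to 1/k
weights = 1.0 / np.arange(1, n_situation_types + 1)
probs = weights / weights.sum()

sampled = rng.choice(n_situation_types, size=n_training_examples, p=probs)
counts = np.bincount(sampled, minlength=n_situation_types)

print("examples of the most common situation:", counts[0])
print("situation types never seen in training:", int((counts == 0).sum()))
print("situation types seen fewer than 10 times:", int((counts < 10).sum()))
```

Even with 100,000 training examples, a large share of the rarer situation types show up only a handful of times – leaving the data-hungry techniques Marcus describes with little to learn from for exactly the cases that matter most in safety-critical settings.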

Marcus says that he can guarantee Musk won’t be shipping fully-autonomous ‘Level 5’ cars this year or next, despite what Musk said at TED2022. Unexpected outlier circumstances, like the appearance of a private jet in the way of a car, will continue to pose a problem to AI for the foreseeable future.

“Seven years is a long time, but the field is going to need to invest in other ideas if we are going to get to AGI before the end of the decade,” explains Marcus. “Or else outliers alone might be enough to keep us from getting there.”

Marcus believes outliers aren’t an unsolvable problem, but there’s currently no known solution. Making any predictions about AGI being achievable by the end of the decade before that issue is anywhere near solved is premature.

Along the same lines, Marcus points out that deep learning is “pretty decent” at recognising objects but nowhere near as adept at human brain-like activities such as planning, reading, or language comprehension.

Marcus illustrates this with a pie chart of the kinds of capabilities an AGI would need to master.

He points out that he’s been using the chart for around five years and that the situation has barely changed: we “still don’t have anything like stable or trustworthy solutions for common sense, reasoning, language, or analogy.”

Tesla is currently building a robot that it claims will be able to perform mundane tasks around the home. Marcus is sceptical, given the problems Tesla is having with its cars on the roads.

“The AGI that you would need for a general-purpose domestic robot (where every home is different, and each poses its own safety risks) is way beyond what you would need for a car that drives on roads that are more or less engineered the same way from one town to the next,” he reasons.

Because AGI is still a somewhat vague term that’s open to interpretation, Marcus lists five specific things he predicts AI will not be able to do by 2029, the year by which Musk claims AGI will be achieved.

Well then, Musk—do you accept Marcus’ challenge? Can’t say I would, even if I had anywhere near Musk’s disposable income.

(Photo by Kenny Eliason on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

AI Day: Elon Musk unveils ‘friendly’ humanoid robot Tesla Bot (20 August 2021)
https://www.artificialintelligence-news.com/2021/08/20/ai-day-elon-musk-unveils-friendly-humanoid-robot-tesla-bot/

During Tesla’s AI Day event, CEO Elon Musk unveiled a robot that is “intended to be friendly”.

Musk has been one of the most prominent figures to warn that AI is a “danger to the public” and potentially the “biggest risk we face as a civilisation”. In 2017, he even said there was just a “five to 10 percent chance of success [of making AI safe]”.

Speaking about London-based DeepMind in a New York Times interview last year, Musk said: “Just the nature of the AI that they’re building is one that crushes all humans at all games. I mean, it’s basically the plotline in ‘War Games’”.

Unveiling a 5ft 8in AI-powered humanoid robot may seem to contradict Musk’s concerns. However, rather than leave development to parties who he believes would be less responsible, Musk believes Tesla can lead in building ethical AI and robotics.

Musk has form in this area after co-founding OpenAI. The company’s mission statement is: “To build safe Artificial General Intelligence (AGI), and ensure AGI’s benefits are as widely and evenly distributed as possible.”

Of course, it all feels a little like building nuclear weapons to deter them—it’s an argument that’s sure to have some rather passionate views on either side.

During the unveiling of Tesla Bot, Musk was sure to point out that you could easily outrun and overpower it.

Tesla Bot is designed to “navigate through a world built for humans” and carry out tasks that are dangerous, repetitive, or boring. One example given was telling the robot to go to the store and pick up specific groceries.

Of course, all we’ve seen of Tesla Bot at this point is a series of PowerPoint slides (if you forget about the weird dance by a performer dressed as a Tesla Bot … which we’re all trying our hardest to.)

The unveiling of the robot followed a 90-minute presentation about some of the AI upgrades coming to Tesla’s electric vehicles. Tesla Bot is essentially a robot version of the company’s vehicles.

“Our cars are basically semi-sentient robots on wheels,” Musk said. “It makes sense to put that into humanoid form.”

AI Day was used to hype Tesla’s advancements in a bid to recruit new talent to the company. 

On its recruitment page, Tesla wrote: “Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring.

“We’re seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.”

A prototype of Tesla Bot is expected next year, although Musk has a history of delays and showing products well before they’re ready across his many ventures. Musk says that it’s important the new machine is not “super expensive”.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

Musk predicts AI will be superior to humans within five years (28 July 2020)
https://www.artificialintelligence-news.com/2020/07/28/musk-predicts-ai-superior-humans-five-years/

Elon Musk has made another of his trademark predictions – this time, it’s that AI will be superior to humans within five years.

Musk has been among the most vocal prominent figures in warning about the dangers of artificial intelligence. In 2018, for example, Musk famously warned that AI could become “an immortal dictator from which we would never escape” and that the technology is more dangerous than nuclear weapons.

Speaking in a New York Times interview, Musk said that current trends suggest AI could overtake humans by 2025. However, he added: “That doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

If correct, the latest prediction from Musk would mean the so-called technological singularity – the point at which machine intelligence overtakes human intelligence – is set to happen much sooner than other experts predict. Ray Kurzweil, a respected futurist, has previously estimated that the singularity will occur around 2045.

As the founder of Tesla, SpaceX, and Neuralink – three companies which use AI far more than most – Musk isn’t against the technology, but has called for it to be regulated.

Musk also co-founded OpenAI back in 2015 with the goal of researching and promoting ethical artificial intelligence. Following disagreements over the company’s direction, Musk left OpenAI in 2018.

Back in February, Musk responded to an MIT Technology Review profile of OpenAI saying that it “should be more open” and that all organisations “developing advanced AI should be regulated, including Tesla.”

Last year, OpenAI decided not to release a text generator which it believed to have dangerous implications in a world already struggling with fake news and disinformation campaigns.

Two graduates later recreated and released a similar generator to OpenAI’s, with one saying that it “allows everyone to have an important conversation about security, and researchers to help secure against future potential abuses.”

OpenAI has since provided select researchers with access to its powerful text generator. The latest version, GPT-3, has been making headlines in recent weeks for the incredible things it can achieve with limited input.

GPT-3 offers 175 billion parameters compared to GPT-2’s 1.5 billion – a sign of the rapid pace of AI advancements. However, Musk’s prediction of the singularity happening within five years perhaps needs to be taken with a pinch of salt.
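
For a sense of the scale jump those figures imply, the ratio is simple arithmetic, using only the parameter counts quoted above:

```python
# Back-of-the-envelope comparison of the two models mentioned above.
gpt2_params = 1.5e9   # 1.5 billion parameters
gpt3_params = 175e9   # 175 billion parameters

print(f"GPT-3 is roughly {gpt3_params / gpt2_params:.0f}x the size of GPT-2")
# -> GPT-3 is roughly 117x the size of GPT-2
```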

(Image Credit: Elon Musk by JD Lasica under CC BY 2.0 license)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Neuralink will share progress on linking human brains with AI next month (9 July 2020)
https://www.artificialintelligence-news.com/2020/07/09/neuralink-share-progress-linking-human-brains-ai/

Elon Musk’s startup Neuralink says it will share progress next month on the company’s mission to link human brains with AI.

Musk made the announcement of the announcement on Twitter.

When Musk appeared on Joe Rogan’s podcast in September 2018, the CEO told Rogan that Neuralink’s long-term goal is to enable human brains to be “symbiotic with AI”, adding that the company would have “something interesting to announce in a few months, that’s at least an order of magnitude better than anything else; probably better than anyone thinks is possible”.

Neuralink held an event in San Francisco in July last year, during simpler times, where the company said it aims to insert electrodes into the brains of monkeys and humans to enable them to control computers.

“Threads” covered in electrodes are implanted in the brain, near the neurons and synapses, by a robot surgeon. These threads relay the neural activity they record to a sensor called the N1.

That, of course, is a very simplified version of a rather complex task. During the event, Neuralink demonstrated that the company’s tech had already been successfully inserted into the brain of a rat and was able to record the information being transmitted by its neurons.

At the time, Musk said he wanted Neuralink to start human trials this year.

Neuralink has been quiet since last year’s event, with its Twitter account not posting a single tweet since then. Now, it seems, the company is ready to share some notable progress.

(Photo by Robina Weermeijer on Unsplash)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Elon Musk wants more stringent AI regulation, including for Tesla (19 February 2020)
https://www.artificialintelligence-news.com/2020/02/19/elon-musk-stringent-ai-regulation-tesla/

Elon Musk has once again called for more stringent regulations around the development of AI technologies.

The founder of Tesla and SpaceX has been one of the most vocal prominent figures in expressing concerns about AI – going as far as to call it humanity’s “biggest existential threat” if left unchecked.

Of course, given the nature of the companies Musk has founded, he is also well aware of AI’s potential.

Back in 2015, Musk co-founded OpenAI – an organisation established with the aim of pursuing and promoting ethical AI development. Musk ended up leaving OpenAI’s board in February 2018 over disagreements with the company’s direction.

Earlier this week, Musk said that OpenAI should be more transparent, and specifically said his confidence in former Google engineer Dario Amodei is “not high” when it comes to safety.

Responding to a piece by MIT Technology Review about OpenAI, Musk tweeted: “All orgs developing advanced AI should be regulated, including Tesla.”

In response to a further question on whether such regulations should come via individual governments or global institutions like the UN, Musk said he believes both.

Musk’s tweet generated some feedback from other prominent industry figures, including legendary Id Software founder John Carmack who recently stepped back from video game development to focus on independent AI research.

Carmack asked Musk: “How would you imagine that working for someone like me? Cloud vendors refuse to spawn larger clusters without a government approval? I would not be supportive.”

Coder Pranay Pathole shared a scepticism similar to Carmack’s about Musk’s call, saying: “Large companies ask for regulations acting all virtuous. What they are really doing is creating barriers for entry for new competition because only they can afford to comply with the new regulations.”

The debate over the extent of AI regulations and how they should be implemented will likely go on for some time – we can only hope to get them right before a disaster occurs. If you want to help Musk in building AI, he’s hosting a “super fun” hackathon at his place.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Elon Musk is hosting a ‘super fun’ AI hackathon at his house (4 February 2020)
https://www.artificialintelligence-news.com/2020/02/04/elon-musk-hosting-fun-ai-hackathon-house/

Fresh from kicking off his EDM career, Elon Musk has announced Tesla will be hosting a “super fun” AI hackathon at his house.

In a tweet, Musk wrote: “Tesla will hold a super fun AI party/hackathon at my house with the Tesla AI/autopilot team in about four weeks. Invitations going out soon.”

The hackathon will focus on the AI behind Tesla’s problematic Autopilot feature, which has been reported to accelerate erratically. Tesla has denied the claims, but the hackathon suggests the company at least wants to make Autopilot more robust.

Ahead of the hackathon announcement, Musk called for developers to join Tesla’s AI team which, he says, reports directly to him.

Talking up the opportunity, Musk highlighted that Tesla will soon have over a million connected vehicles worldwide. Every Tesla is fitted with the sensors and computing power needed for full self-driving, which “is orders of magnitude more than everyone else combined”.

Musk says that an individual’s educational background is irrelevant. However, before you get too excited about waltzing into Tesla, you will be required to pass a “hardcore coding test”.

Python is used “for rapid iteration” at Tesla to build neural networks before the code is converted into “C++/C/raw metal driver code” for the speed required for such critical tasks as piloting a vehicle.
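
The article only names the languages involved, but the general pattern it describes – prototype a network in Python, then hand a compact artefact to C/C++ code for low-latency inference – can be sketched roughly as below. This is a generic, hypothetical example (toy model, made-up file name), not Tesla’s actual pipeline.

```python
# A generic sketch of the "prototype in Python, deploy to C++" workflow;
# the model and file name are hypothetical and unrelated to Tesla's code.
import torch
import torch.nn as nn

class TinyPerceptionNet(nn.Module):
    """Toy stand-in for a network prototyped during rapid iteration."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 4),  # four made-up output signals
        )

    def forward(self, x):
        return self.backbone(x)

model = TinyPerceptionNet().eval()
example_input = torch.randn(1, 3, 64, 64)

# Tracing produces a serialised TorchScript graph that a C++ runtime
# (libtorch) can load and execute without a Python interpreter.
scripted = torch.jit.trace(model, example_input)
scripted.save("tiny_perception_net.pt")
```

On the C++ side, a deployment like this would typically load the saved artefact and run it in a tight, compiled loop – the kind of speed-critical stage the article says Tesla rewrites in “C++/C/raw metal driver code”.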

As part of that need for speed, Tesla is also taking on the challenge of building its own AI chips. Musk says Tesla is seeking world-class chip designers to join the company’s teams based in Palo Alto and Austin.

Musk has been vocal about his fears of AI – calling it a potentially existential threat if left unchecked. However, he is also well aware of its opportunities (if that’s of any surprise given how it’s helping to inflate his and investors’ wallets).

“My actions, not just words, show how critically I view (benign) AI,” Musk wrote.

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Neil deGrasse Tyson shares Musk’s view that AI is ‘our biggest existential crisis’ (4 October 2019)
https://www.artificialintelligence-news.com/2019/10/04/neil-degrasse-tyson-musk-ai-biggest-existential-crisis/

Legendary astrophysicist Neil deGrasse Tyson shares the view of Tesla founder Elon Musk that AI poses mankind’s “biggest existential crisis”.

Musk made his now-infamous comment during the South by Southwest tech conference in Austin, Texas last year as part of a call for regulation. Musk warned: “I think that’s the single biggest existential crisis that we face and the most pressing one.”

A year later, during an episode of his StarTalk radio show, Neil deGrasse Tyson was asked what he believes to be the biggest threat to mankind.

Dr Tyson appeared alongside Josh Clark, host of the “Stuff You Should Know” and “The End of The World” podcasts, who was also asked the same question.

“I would say that AI is probably our biggest existential crisis,” Clark said. “The reason why is because we are putting onto the table right now the pieces for a machine to become super intelligent.”

Clark goes on to explain how we don’t yet know how to fully define, let alone program, morality and friendliness.

“We make the assumption that if AI became super intelligent that friendliness would be a property of that intelligence. That is not necessarily true.”

Dr Tyson chimed in to say he initially had a different answer to what poses the greatest threat to mankind. “I had a different answer, but I like your answer better than the answer I was going to give,” he said.

“What won me over with your argument was that if you locked AI in a box, it would get out. My gosh, it gets out every time. Before I was thinking, ‘This is America, AI gets out of control, you shoot it’… but that does not work, because AI might be in a box, but it will convince you to let it out.”

Dr Tyson does not say what his previous answer was going to be, but he’s warned in the past about the dangers of huge asteroids impacting the Earth and joined calls for action on climate change.

Earlier this week, AI News reported on comments made by Pope Francis who also warned of the dangers of unregulated AI. Pope Francis believes a failure to properly consider the moral and ethical implications of the technology could risk a ‘regression to a form of barbarism’.

(Image by Thor Nielsen / NTNU under CC BY-SA 2.0 license)

Interested in hearing industry leaders discuss subjects like this? Attend the co-located 5G Expo, IoT Tech Expo, Blockchain Expo, AI & Big Data Expo, and Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London, and Amsterdam.

Musk wants to link human brains with machines to ‘stop the AI apocalypse’ (18 July 2019)
https://www.artificialintelligence-news.com/2019/07/18/musk-link-human-brains-machines-stop-ai-apocalypse/

Elon Musk’s ambitious Neuralink project to link human brains with machines is part of the entrepreneur’s crusade to “stop the AI apocalypse”.

Musk has been vocal with his concerns about AI, famously saying we’re “summoning the devil” with its development. The SpaceX and Tesla founder created OpenAI in part due to concerns that AI could pose an existential risk if developed carelessly.

In February last year, Musk left OpenAI over disagreements about the company’s direction. OpenAI made headlines earlier this year for developing what was essentially a fake news generator which, for obvious reasons, it deemed too dangerous to release.

Musk is now focusing on his Neuralink startup which aims to build an implant which connects human brains with computers. Neuralink’s chips have been implanted in rats so far with the company aiming for human implants within two years.

There are essentially two approaches to brain-to-machine interfaces: invasive, with electrodes placed directly in or on the brain, or non-invasive, with electrodes placed on the scalp. Neuralink is going the former route.

Neuralink’s chip contains an array of up to 96 small polymer threads, each with up to 32 electrodes. The chip is implanted into the brain using a robot and a two-millimetre incision. Musk claims this chip will then be able to connect wirelessly to devices.
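
Taking the figures in that paragraph at face value, the maximum channel count follows directly; the sketch below only restates the article’s numbers and is not based on any Neuralink specification beyond them.

```python
# Channel count implied by the figures quoted above (96 threads x 32 electrodes).
from dataclasses import dataclass

@dataclass
class ImplantSpec:
    max_threads: int = 96            # "up to 96 small polymer threads"
    electrodes_per_thread: int = 32  # "up to 32 electrodes" per thread

    @property
    def max_channels(self) -> int:
        return self.max_threads * self.electrodes_per_thread

print("Maximum recording channels:", ImplantSpec().max_channels)  # 3072
```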

The first goal of Neuralink is to enable the control of mobile devices and improve the lives of people with brain damage or similar disabilities. Further down the line, Musk says: “After solving a bunch of brain-related diseases there is the mitigation of the existential threat of AI.”

Neuralink’s first human trials are expected next year, pending FDA approval. Musk says not to hold out hope for complete brain-to-machine interfaces anytime soon.

“It’s not going to be like suddenly Neuralink will have this incredible new interface and take over people’s brains,” he said. “It will take a long time, and you’ll see it coming. Getting FDA approval for implantable devices of any kind is quite difficult and this will be a slow process.”

Interested in hearing industry leaders discuss subjects like this and their use cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo.

Nvidia CEO is ‘happy to help’ if Tesla’s AI chip ambitions fail (17 August 2018)
https://www.artificialintelligence-news.com/2018/08/17/nvidia-ceo-help-tesla-ai-chip/

Nvidia CEO Jensen Huang has teased that his company is ‘happy to help’ if Tesla fails in its goal of launching a rival AI chip.

Tesla currently uses Nvidia’s silicon for its vehicles. The company’s CEO, Elon Musk, said earlier this month that he’s a “big fan” of Nvidia but that an in-house AI chip would be able to outperform those of the leading processor manufacturer.

During a conference call on Thursday, Huang said Nvidia’s customers are “super excited” about its Xavier technology for autonomous machines. He also noted that Xavier is already in production, whereas Tesla’s rival chip is yet to be seen.

Here’s what Huang had to say during the call:

“With respect to the next generation, it is the case that when we first started working on autonomous vehicles, they needed our help. We used the 3-year-old Pascal GPU for the current generation of Autopilot computers.

It’s very clear now that in order to have a safe Autopilot system, we need a lot more computing horsepower. In order to have safe computing, in order to have safe driving, the algorithms have to be rich. It has to be able to handle corner conditions in a lot of diverse situations.

Every time there are more and more corner conditions or more subtle things that you have to do, or you have to drive more smoothly or be able to take turns more quickly, all of those requirements require greater computing capability. And that’s exactly the reason why we built Xavier. Xavier is in production now. We’re seeing great success and customers are super excited about Xavier.

That’s exactly the reason why we built it. It’s super hard to build Xavier and all the software stack on top of it. If it doesn’t turn out for whatever reason for them [Tesla] you can give me a call and I’d be more than happy to help.”

The conference call was held following the release of Nvidia’s fiscal earnings report, in which the company reported better-than-expected earnings.

“Growth across every platform – AI, Gaming, Professional Visualization, self-driving cars – drove another great quarter,” said Huang. “Fueling our growth is the widening gap between demand for computing across every industry and the limits reached by traditional computing.”

However, due to lower-than-expected revenue guidance, Nvidia stock fell by six percent on Thursday following the earnings report.

What are your thoughts on Huang’s comments? Let us know below.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.

Prowler.io aspires to build AI which makes human-like decisions (5 September 2017)
https://www.artificialintelligence-news.com/2017/09/05/prowler-io-build-ai-human-like-decisions/

Cambridge-based AI startup Prowler has raised £10 million to help it build an AI which can make human-like decisions.

Based on the comments made by Elon Musk and Vladimir Putin in our article yesterday, you’d be forgiven for finding this concerning. AI like Prowler’s, however, could be what saves us from destruction.

If you missed it, Musk voiced his concern about Putin’s comment that the nation which leads in AI “will become the ruler of the world.”

Some are concerned about an AI arms race, and that, without human input, an AI relying solely on logic could conclude that actions such as launching preemptive strikes are the surest way to achieve victory or protect its host nation. Such a decision would be made without any regard for the human, environmental, political, and long-term destabilising catastrophes it would create.

Mark Cuban – billionaire and possible 2020 US presidential candidate – has shared similar concerns about killer AI.

Cuban, however, points out that it’s AIs which fail to take on human-like qualities that pose the biggest threat.

Most AI development today focuses on deep neural networks trained on vast amounts of data. This approach is highly effective for problem-solving, but Prowler argues it is limiting for decision-making.

Building a decision-making AI

Prowler is building a decision-making AI based on probabilistic modelling, reinforcement learning, and game theory. It has recruited experts in each of those areas, combining their knowledge to create an AI it hopes will make decisions as well as a human.
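
The article names Prowler’s ingredients – probabilistic modelling, reinforcement learning, and game theory – without going into detail, so the snippet below is only a generic illustration of one of them: tabular Q-learning on a toy five-state decision problem. It has no connection to Prowler’s actual technology.

```python
# Generic tabular Q-learning on a toy 5-state chain: the agent learns a
# policy (a decision rule) from trial and error. Purely illustrative.
import numpy as np

rng = np.random.default_rng(1)

n_states, n_actions = 5, 2      # states 0..4; actions: 0 = left, 1 = right
goal_state = 4                  # reaching the right end yields reward 1
q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(2_000):
    state = 0
    while state != goal_state:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q[state]))

        next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
        reward = 1.0 if next_state == goal_state else 0.0

        # Q-learning update rule
        q[state, action] += alpha * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state

policy = ["left" if a == 0 else "right" for a in np.argmax(q, axis=1)]
print("Learned policy per state:", policy)  # mostly "right"
```

Prowler’s pitch is that combining this kind of learned decision-making with probabilistic models and game theory gets closer to human-like judgement than pattern recognition over big datasets alone.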

The company was co-founded by two alums from AI company VocalIQ, which was acquired 13 months after launch by Apple. Today, Cambridge Innovation Capital (CIC) announced it led a £10 million Series A funding round for Prowler.

“Prowler has assembled a world-class team of researchers to tackle some of the most intractable problems of our age,” comments Andrew Williamson, Investment Director at CIC, who is joining the Board of Prowler. “It is hugely exciting that the company is able to capitalise on the expertise in probabilistic modelling, principled machine learning and game theory available in Cambridge.”

“This investment allows us to expand our world-leading team of academics and developers, enhancing our research bandwidth and accelerating our technology into the market,” added Vishal Chatrath, CEO of Prowler. “As a team, we will use the funding to take the business to the next stage and we will continue to solve some of the world’s hardest machine learning problems.”

Despite popular belief, getting AI to take on more human-like qualities could be what saves us all. At the very least, it will be far more efficient.

What are your thoughts about AI making human-like decisions? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
