AI Robotics News | Latest Robotics in AI Developments | AI News
https://www.artificialintelligence-news.com/categories/ai-robotics/

Open X-Embodiment dataset and RT-X model aim to revolutionise robotics
https://www.artificialintelligence-news.com/2023/10/04/open-x-embodiment-dataset-rt-x-model-aim-revolutionise-robotics/
Wed, 04 Oct 2023

The post Open X-Embodiment dataset and RT-X model aim to revolutionise robotics appeared first on AI News.

In a collaboration between 33 academic labs worldwide, a consortium of researchers has unveiled a revolutionary approach to robotics.

Traditionally, robots have excelled in specific tasks but struggled with versatility, requiring individual training for each unique job. However, this limitation might soon be a thing of the past.

Open X-Embodiment: The gateway to generalist robots

At the heart of this transformation lies the Open X-Embodiment dataset, a monumental effort pooling data from 22 distinct robot types.

With the contributions of over 20 research institutions, this dataset comprises over 500 skills, encompassing a staggering 150,000 tasks across more than a million episodes.

This treasure trove of diverse robotic demonstrations represents a significant leap towards training a universal robotic model capable of multifaceted tasks.
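The core idea behind such a pooled dataset can be sketched in a few lines: episodes recorded on robots with different action spaces are normalised into a shared format and interleaved into one training stream. Everything below – the `Episode` class, the 7-dimension action padding, the dataset layout – is a hypothetical illustration, not the project's actual data pipeline:

```python
import random
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Episode:
    robot: str          # which embodiment the episode came from
    instruction: str    # natural-language task description
    steps: list         # (observation, action) pairs in a shared format

def normalise(raw_step, robot):
    """Map a robot-specific step into a shared (observation, 7-dim action) format."""
    observation, action = raw_step
    # Pad or truncate the action vector so every embodiment uses 7 dimensions.
    action = (list(action) + [0.0] * 7)[:7]
    return observation, action

def pooled_episodes(datasets: dict, seed: int = 0) -> Iterator[Episode]:
    """Interleave episodes from per-robot datasets into one training stream."""
    rng = random.Random(seed)
    episodes = [
        Episode(robot, instruction, [normalise(s, robot) for s in steps])
        for robot, eps in datasets.items()
        for instruction, steps in eps
    ]
    rng.shuffle(episodes)  # mix embodiments so training batches are cross-robot
    yield from episodes
```

A model trained on such a stream sees demonstrations from every embodiment in a single format, which is what allows one policy to serve many robot types.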

RT-1-X: A general-purpose robotics model

Accompanying this dataset are the RT-X models. RT-1-X was produced by training RT-1 – a real-world robotic control model – on the new cross-embodiment data, and it exhibits exceptional skill transferability across various robot embodiments.

In rigorous testing across five research labs, RT-1-X outperformed models developed independently for each specific robot by an average of 50 percent.

The success of RT-1-X signifies a paradigm shift, demonstrating that training a single model with diverse, cross-embodiment data dramatically enhances its performance on various robots.

Emergent skills: Leaping into the future

The experimentation did not stop there. Researchers explored emergent skills, delving into uncharted territories of robotic capabilities.

RT-2-X, an advanced version of the vision-language-action model, exhibited remarkable spatial understanding and problem-solving abilities. By incorporating data from different robots, RT-2-X demonstrated an expanded repertoire of tasks, showcasing the potential of shared learning in the robotic realm.

A responsible approach

Crucially, this research emphasises a responsible approach to the advancement of robotics. 

By openly sharing data and models, the global community can collectively elevate the field—transcending individual limitations and fostering an environment of shared knowledge and progress.

The future of robotics lies in mutual learning, where robots teach each other, and researchers learn from one another. The momentous achievement unveiled this week paves the way for a future where robots seamlessly adapt to diverse tasks, heralding a new era of innovation and efficiency.

(Photo by Brett Jordan on Unsplash)

See also: Amazon invests $4B in Anthropic to boost AI capabilities

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

UK commits £13M to cutting-edge AI healthcare research
https://www.artificialintelligence-news.com/2023/08/10/uk-commits-13m-cutting-edge-ai-healthcare-research/
Thu, 10 Aug 2023

The post UK commits £13M to cutting-edge AI healthcare research appeared first on AI News.

The UK has announced a £13 million investment in cutting-edge AI research within the healthcare sector.

The announcement, made by Technology Secretary Michelle Donelan, marks a major step forward in harnessing the potential of AI in revolutionising healthcare. The investment will empower 22 winning projects across universities and NHS trusts, from Edinburgh to Surrey, to drive innovation and transform patient care.

Dr Antonio Espingardeiro, IEEE member and software and robotics expert, comments:

“As it becomes more sophisticated, AI can efficiently conduct tasks traditionally undertaken by humans. The potential for the technology within the medical field is huge—it can analyse vast quantities of information and, when coupled with machine learning, search through records and infer patterns or anomalies in data that would otherwise take decades for humans to analyse.

We are just starting to see the beginning of a new era where machine learning could bring substantial value and transform the traditional role of the doctor. The true capabilities of this technology as an aid to the healthcare sector are yet to be fully realised. In the future, we may even be able to solve some of the biggest challenges and issues of our time.”

One of the standout projects receiving funding is the University College London’s Centre for Interventional and Surgical Sciences. With a grant exceeding £500,000, researchers aim to develop a semi-autonomous surgical robotics platform designed to enhance the removal of brain tumours. This pioneering technology promises to elevate surgical outcomes, minimise complications, and expedite patient recovery times.

“With the increased adoption of AI and robotics, we will soon be able to deliver the scalability that the healthcare sector needs and establish more proactive care delivery,” added Espingardeiro.

The University of Sheffield’s project, backed by £463,000, focuses on a crucial aspect of healthcare – chronic nerve pain. Its innovative approach aims to widen and improve treatments for this condition, which affects one in ten adults over 30.

The University of Oxford’s project, bolstered by £640,000, seeks to expedite research into a foundational AI model for clinical risk prediction. By analysing an individual’s existing health conditions, this AI model could accurately forecast the likelihood of future health problems and revolutionise early intervention strategies.

Meanwhile, Heriot-Watt University in Edinburgh has secured £644,000 to develop a groundbreaking system that offers real-time feedback to trainee surgeons practising laparoscopy procedures, also known as keyhole surgeries. This technology promises to enhance the proficiency of aspiring surgeons and elevate the overall quality of healthcare.

Finally, the University of Surrey’s project – backed by £456,000 – will collaborate closely with radiologists to develop AI capable of enhancing mammogram analysis. By streamlining and improving this critical diagnostic process, AI could contribute to earlier cancer detection.

Ayesha Iqbal, IEEE senior member and engineering trainer at the Advanced Manufacturing Training Centre, said:

“The emergence of AI in healthcare has completely reshaped the way we diagnose, treat, and monitor patients.

Applications of AI in healthcare include finding new links between genetic codes, performing robot-assisted surgeries, improving medical imaging methods, automating administrative tasks, personalising treatment options, producing more accurate diagnoses and treatment plans, enhancing preventive care and quality of life, predicting and tracking the spread of infectious diseases, and helping combat epidemics and pandemics.”

With the UK healthcare sector already witnessing AI applications in improving stroke diagnosis, heart attack risk assessment, and more, the £13 million investment is poised to further accelerate transformative healthcare breakthroughs.

Health and Social Care Secretary Steve Barclay commented:

“AI can help the NHS improve outcomes for patients, with breakthroughs leading to earlier diagnosis, more effective treatments, and faster recovery. It’s already being used in the NHS in a number of areas, from improving diagnosis and treatment for stroke patients to identifying those most at risk of a heart attack.

“This funding is yet another boost to help the UK lead the way in healthcare research. It comes on top of the £21 million we recently announced for trusts to roll out the latest AI diagnostic tools and £123 million invested in 86 promising technologies through our AI in Health and Care Awards.”

However, the announcement was made the same week as NHS waiting lists hit a record high. Prime Minister Rishi Sunak has made reducing waiting lists one of his five key priorities for 2023, inviting the public to hold him “to account directly for whether it is delivered.” Hope is being pinned on technologies like AI to help tackle waiting lists.

This pivotal move is accompanied by the nation’s preparations to host the world’s first major international summit on AI safety, underscoring its commitment to responsible AI development.

Scheduled for later this year, the AI safety summit will provide a platform for international stakeholders to collaboratively address AI’s risks and opportunities.

As Europe’s AI leader, and the third-ranking globally behind the USA and China, the UK is well-positioned to lead these discussions and champion the responsible advancement of AI technology.

(Photo by National Cancer Institute on Unsplash)

See also: BSI publishes guidance to boost trust in AI for healthcare

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

SK Telecom outlines its plans with AI partners
https://www.artificialintelligence-news.com/2023/06/20/sk-telecom-outlines-its-plans-with-ai-partners/
Tue, 20 Jun 2023

The post SK Telecom outlines its plans with AI partners appeared first on AI News.

SK Telecom (SKT) is taking significant steps to solidify its position in the global AI ecosystem. 

The company recently held a meeting at its Silicon Valley headquarters with CEOs from four new AI partners – CMES, MakinaRocks, Scatter Lab, and FriendliAI – to discuss business cooperation and forge a path towards leadership in the AI industry.

SKT has been actively promoting AI transformation through strategic partnerships and collaborations with various AI companies. During MWC 2023, the company announced partnerships with seven AI companies: SAPEON, Bespin Global, Moloco, Konan Technology, Swit, Phantom AI, and Tuat.

During the meeting, SKT’s CEO Ryu Young-sang outlined the company’s AI vision and discussed its business plans with the AI partners. The executives from SKT and its AI partners engaged in in-depth discussions on major global AI trends, the latest technological achievements, ongoing R&D projects, and global business and investment opportunities.

One of the notable discussions took place between SKT and CMES, an AI-powered robotics company.

SKT and CMES exchanged views and ideas on the development of pricing plans for “Robot as a Service (RaaS)” and subscription-based business models for AI-driven RaaS tailored for enterprises.

RaaS is gaining attention as a cost-effective alternative to additional manpower or infrastructure investment for automation. The demand for RaaS is expected to grow rapidly in sectors such as logistics, delivery, construction, and healthcare.

Furthermore, SKT aims to collaborate with Scatter Lab, a renowned AI startup known for its Lee Lu-da chatbot. The company plans to integrate an emotional AI agent into its AI service, ‘A.’

Additionally, SKT discussed strategies for synergy creation with MakinaRocks, a startup specialising in industrial AI solutions, and FriendliAI, a startup that provides a platform for developing generative AI models. By joining forces, the companies aim to establish a leading position in the global AI market.

Ryu Young-sang, CEO of SKT, commented:

“Now with our AI partners on board, we have completed the blueprint for driving new growth in the global market.

We will work together to develop diverse cooperation opportunities in AI, and bring our AI technologies and services to the global market.”

By harnessing the expertise and technologies of its AI partners, SKT is well-positioned to lead the global AI ecosystem and deliver innovative AI solutions to the market.

(Photo by Brett Jordan on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.

Tesla’s AI supercomputer tripped the power grid
https://www.artificialintelligence-news.com/2022/10/03/tesla-ai-supercomputer-tripped-power-grid/
Mon, 03 Oct 2022

The post Tesla’s AI supercomputer tripped the power grid appeared first on AI News.

Tesla’s purpose-built AI supercomputer ‘Dojo’ is so powerful that it tripped the power grid.

Dojo was unveiled at Tesla’s annual AI Day last year but the project was still in its infancy. At AI Day 2022, Tesla unveiled the progress it has made with Dojo over the course of the year.

The supercomputer has transitioned from just a chip and training tiles into a full cabinet. Tesla claims that it can replace six GPU boxes with a single Dojo tile, which it says is cheaper than one GPU box.

Per tray, there are six Dojo tiles. Tesla claims that each tray is equivalent to “three to four fully-loaded supercomputer racks”. Two trays can fit in a single Dojo cabinet with a host assembly.

Such a supercomputer naturally has a large power draw. Dojo requires so much power that it managed to trip the grid in Palo Alto.

“Earlier this year, we started load testing our power and cooling infrastructure. We were able to push it over 2 MW before we tripped our substation and got a call from the city,” said Bill Chang, Tesla’s Principal System Engineer for Dojo.

In order to function, Tesla had to build custom infrastructure for Dojo with its own high-powered cooling and power system.

An ‘ExaPOD’ (consisting of a few Dojo cabinets) has the following specs:

  • 1.1 EFLOPS of compute
  • 1.3 TB of SRAM
  • 13 TB of DRAM

Seven ExaPODs are currently planned to be housed in Palo Alto.
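Taking the quoted per-pod figures at face value, the planned seven-pod deployment multiplies out as follows. This is a back-of-envelope sketch based only on the numbers above, not official Tesla totals:

```python
# Headline totals for seven ExaPODs, using the per-pod specs quoted above.
EXAPODS = 7
EFLOPS_PER_POD = 1.1   # exaFLOPS of compute per ExaPOD
SRAM_TB_PER_POD = 1.3  # terabytes of SRAM per ExaPOD
DRAM_TB_PER_POD = 13   # terabytes of DRAM per ExaPOD

total_eflops = EXAPODS * EFLOPS_PER_POD    # 7.7 EFLOPS
total_sram_tb = EXAPODS * SRAM_TB_PER_POD  # 9.1 TB of SRAM
total_dram_tb = EXAPODS * DRAM_TB_PER_POD  # 91 TB of DRAM
```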

Dojo is purpose-built for AI and will greatly improve Tesla’s ability to train neural nets using video data from its vehicles. These neural nets will be critical for Tesla’s self-driving efforts and its humanoid robot ‘Optimus’, which also made an appearance during this year’s event.

Optimus

Optimus was also first unveiled last year and was even more in its infancy than Dojo. In fact, all it was at the time was a person in a spandex suit and some PowerPoint slides.

While it’s clear that Optimus still has a long way to go before it can do the shopping and carry out dangerous manual labour tasks, as Tesla envisions, we at least saw a working prototype of the robot at AI Day 2022.

“I do want to set some expectations with respect to our Optimus robot,” said Tesla CEO Elon Musk. “As you know, last year it was just a person in a robot suit. But, we’ve come a long way, and compared to that it’s going to be very impressive.”

Optimus can now walk around and, if attached to an overhead apparatus, do some basic tasks like watering plants.

The prototype of Optimus was reportedly developed in the past six months and Tesla is hoping to get a working design within the “next few months… or years”. The price tag is “probably less than $20,000”.

All the details of Optimus are still vague at the moment, but at least there’s more certainty around the Dojo supercomputer.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Chess robot breaks child’s finger after premature move
https://www.artificialintelligence-news.com/2022/07/25/chess-robot-breaks-childs-finger-after-premature-move/
Mon, 25 Jul 2022

The post Chess robot breaks child’s finger after premature move appeared first on AI News.

A robot went rogue at a Moscow chess tournament and broke a kid’s finger after he made a move prematurely. 

The robot, which uses AI to play three chess games at once, grabbed and pinched the child’s finger. Unfortunately, despite several people rushing to help, the child’s finger was broken.

According to Moscow Chess Federation VP Sergey Smagin, the robot has been used for 15 years and this is the first time such an incident has occurred.

Reports suggest the robot expects its human rival to leave a set amount of time after it makes its play. The child played too quickly and the robot didn’t know how to handle the situation.

“There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait,” Smagin said. “This is an extremely rare case, the first I can recall.”
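The missing interlock described here – waiting a set time after the robot moves before a human move is accepted – can be sketched as a simple guard. This is a hypothetical illustration of that kind of safety rule, not the tournament robot's actual control code:

```python
class TurnGuard:
    """Reject human moves made before a minimum delay after the robot's move.

    A sketch of the interlock the officials suggest was violated: the arm
    should stay parked until the guard allows the human's move.
    """

    def __init__(self, min_delay_s: float):
        self.min_delay_s = min_delay_s
        self.robot_moved_at = None  # timestamp of the robot's last move

    def robot_done(self, now: float):
        """Record when the robot finished its move."""
        self.robot_moved_at = now

    def human_move_allowed(self, now: float) -> bool:
        """True only once the minimum delay has elapsed."""
        if self.robot_moved_at is None:
            return True  # start of game: nothing to wait for
        return (now - self.robot_moved_at) >= self.min_delay_s
```

In a real system the arm controller would also halt on any unexpected contact, regardless of timing; the guard above only addresses the turn-taking rule.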

It doesn’t paint Russia’s robotics scene in the best light, and it’s quite surprising the story even made it past the country’s notorious censorship.

Fortunately, the child’s finger has been put in a cast and he is expected to make a quick and complete recovery. There doesn’t appear to be any lasting mental trauma either as he played again the next day.

A study in 2015 found that one person is killed each year by an industrial robot in the US alone. As robots become ever more prevalent in our work and personal lives, that number is likely to increase.

Most injuries and fatalities with robots are from human error, so it’s always worth being cautious.

(Photo by GR Stocks on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

IBM’s AI-powered Mayflower ship crosses the Atlantic
https://www.artificialintelligence-news.com/2022/06/06/ibm-ai-powered-mayflower-ship-crosses-the-atlantic/
Mon, 06 Jun 2022

The post IBM’s AI-powered Mayflower ship crosses the Atlantic appeared first on AI News.

A groundbreaking AI-powered ship designed by IBM has successfully crossed the Atlantic, albeit not quite as planned.

The Mayflower – named after the ship which carried Pilgrims from Plymouth, UK to Massachusetts, US in 1620 – is a 50-foot crewless vessel that relies on AI and edge computing to navigate the often harsh and unpredictable oceans.

IBM’s Mayflower has been attempting to autonomously complete the voyage that its predecessor did over 400 years ago but has been beset by various problems.

The initial launch was planned for June 2021 but a number of technical glitches forced the vessel to return to Plymouth.

Back in April 2022, the Mayflower set off again. This time, an issue with the generator forced the boat to divert to the Azores Islands in Portugal.

The Mayflower was patched up and pressed on until late May when a problem developed with the charging circuit for the generator’s starter batteries. This time, a course for Halifax, Nova Scotia was charted.

More than five weeks after departing Plymouth, the modern Mayflower is now docked in Halifax. While it’s yet to reach its final destination, the Mayflower has successfully crossed the Atlantic (hiccups aside).

While mechanically the ship leaves a lot to be desired, IBM says the autonomous systems have worked flawlessly—including the AI captain developed by MarineAI.

It’s beyond current AI systems to instruct and control robotics to carry out mechanical repairs for any number of potential failures. However, the fact that Mayflower’s onboard autonomous systems have been able to successfully navigate the ocean and report back mechanical issues is an incredible achievement.

“It will be entirely responsible for its own navigation decisions as it progresses so it has very sophisticated software on it—AIs that we use to recognise the various obstacles and objects in the water, whether that’s other ships, boats, debris, land obstacles, or even marine life,” Robert High, VP and CTO of Edge Computing at IBM, told Edge Computing News in an interview.

IBM designed Mayflower 2.0 with marine research nonprofit Promare. The ship uses a wind/solar hybrid propulsion system and features a range of sensors for scientific research on its journey including acoustic, nutrient, temperature, and water and air samplers.

You can find out more about the Mayflower and view live data and webcams from the ship on the project’s website.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Georgia State researchers design artificial vision device for microrobots
https://www.artificialintelligence-news.com/2022/04/21/georgia-state-researchers-design-artificial-vision-device-for-microrobots/
Thu, 21 Apr 2022

The post Georgia State researchers design artificial vision device for microrobots appeared first on AI News.

Researchers at Georgia State University (GSU) have designed an ‘electric eye’ – an artificial vision device – for micro-sized robots.

Through using synthetic methods, the device mimics the biochemical processes that allow for vision in the natural world.

It improves on previous research in terms of colour recognition, a particularly challenging area due to the difficulty of downscaling colour sensing devices. Conventional colour sensors typically consume a large amount of physical space and offer less accurate colour detection.

This was achieved through a unique vertical stacking architecture that offers a novel approach to how the device is designed. Its van der Waals semiconductor powers the sensors with precise colour recognition capabilities whilst simplifying the lens system for downscaling.

“The new functionality achieved in our image sensor architecture all depends on the rapid progress of van der Waals semiconductors during recent years,” said one of the researchers.

“Compared with conventional semiconductors, such as silicon, we can precisely control the van der Waals material band structure, thickness, and other critical parameters to sense the red, green, and blue colours.”

ACS Nano, a scientific journal on nanotechnology, published the research. The article itself focused on illustrating the fundamental principles and feasibility behind artificial vision in the new micro-sized image sensor.

Sidong Lei, assistant professor of Physics at GSU and the research lead, said: “More than 80% of information is captured by vision in research, industry, medication, and our daily life. The ultimate purpose of our research is to develop a micro-scale camera for microrobots that can enter narrow spaces that are intangible by current means, and open up new horizons in medical diagnosis, environmental study, manufacturing, archaeology, and more.”

The technology is currently patent pending with Georgia State’s Office of Technology Transfer and Commercialisation.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

Data analytics’ centrality to F1 racing
https://www.artificialintelligence-news.com/2022/01/11/data-analytics-centrality-to-f1-racing/
Tue, 11 Jan 2022

The post Data analytics’ centrality to F1 racing appeared first on AI News.

To the fan or the casual onlooker, a Formula One race involves drivers, the car, and a pit crew. These are the visible teams that you see at the race.

Fans know there is a factory of high-end engineers who craft the cars that do battle on tracks globally, but there is another equally important and high-end team that is less visible – data.

The Bahrain Grand Prix, for example, demonstrated the power of data to win a race. In one of the closest and hardest-fought contests, Lewis Hamilton and the Mercedes-AMG team beat hard competition from Red Bull and Max Verstappen, who had the lead from the start lights.

While Hamilton and Mercedes stayed close to Red Bull, thanks to the team’s data-decision strategies, Mercedes were able to execute an undercut, a decision to pit early for fresh tyres and use the extra performance from those tyres to take the lead.

The fresh tyres meant Hamilton was able to lap the circuit up to two seconds faster than Verstappen.

The undercut took place at the first pit stop, and the plan was to put Verstappen under threat of a second undercut from third-placed Valtteri Bottas in the second Mercedes.

However, a mechanical issue delayed Bottas’ pit stop, and leader Hamilton faced the possibility that Verstappen and Red Bull would counterattack with their own undercut.

Bahrain is a race that has high tyre wear, and since Hamilton pitted early he had to do extra laps on worn tyres. This allowed Verstappen to get close to Hamilton and put the team under immense pressure in the closing laps.

Hamilton won by just half a second, with driver excellence in protecting tyres – combined with the team’s data-decision strategies – carrying the victory.

The timing of pit stops to execute an undercut is just one area where data has changed the race.
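The undercut arithmetic described above can be made explicit. As a rough sketch (assuming both cars lose the same fixed time in the pit lane, so only the fresh-tyre pace delta and the out-lap matter; the numbers are illustrative, not Mercedes-AMG's actual model):

```python
def undercut_gain(tyre_delta_s: float, laps_before_rival_pits: int,
                  out_lap_penalty_s: float = 0.5) -> float:
    """Net time gained on a rival by pitting first (the 'undercut').

    tyre_delta_s: per-lap pace advantage of fresh tyres over worn ones
    laps_before_rival_pits: laps run on fresh tyres while the rival stays out
    out_lap_penalty_s: time lost bringing the new tyres up to temperature

    Assumes both cars lose identical time in the pit lane, so that
    component cancels out of the comparison.
    """
    return tyre_delta_s * laps_before_rival_pits - out_lap_penalty_s
```

With fresh tyres worth up to two seconds a lap, as in Bahrain, even a single extra lap before the rival responds is enough to jump ahead; conversely, a small tyre delta makes pitting first a net loss, which is why the call is driven by live data rather than a fixed rule.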

For the 2021 season, new rules were introduced related to car aerodynamics. A new aerodynamic package can completely change the characteristics of the car.

Mercedes-AMG uses TIBCO Spotfire to keep track of the car set-ups used by the team across the season, and to unpick that data and map it to new data from testing and simulations.

Together, these data sets provide insights into how the car behaves under the new regulations, and this helps direct car development and race strategies.

Spotfire is a key tool in post-race reviews, allowing the team to analyse race events such as race starts, and develop data sets focused on a track and its conditions, which are invaluable for future races.

Insights the team has gained include braking traction, tyre traction recovery, and throttle usage, all of which are used to understand and tweak ongoing race strategies.
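Spotfire itself is a commercial visual-analytics product, but the underlying step of mapping set-up records onto run data can be sketched in plain Python; the tables, column names, and figures below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical per-race set-up records and per-run telemetry summaries.
setups = {
    "Bahrain": {"front_wing_angle": 24.5, "ride_height_mm": 32},
    "Imola": {"front_wing_angle": 22.0, "ride_height_mm": 34},
}
runs = [
    {"race": "Bahrain", "lap_time_s": 92.1, "tyre_wear_pct": 14.0},
    {"race": "Bahrain", "lap_time_s": 91.8, "tyre_wear_pct": 15.2},
    {"race": "Imola", "lap_time_s": 76.3, "tyre_wear_pct": 9.8},
]

# Map each run onto the set-up used that weekend, then average pace per
# set-up: the kind of mapping used to compare car behaviour across events.
by_race = defaultdict(list)
for run in runs:
    by_race[run["race"]].append(run["lap_time_s"])

avg_pace = {
    race: (setups[race], sum(times) / len(times))
    for race, times in by_race.items()
}
```

With the runs joined to their set-ups, each set-up parameter can be plotted against pace or wear to spot which changes actually helped.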

Putting the digital twin to the test

The data collected from each race is used in the build-up to the next. The team has developed a digital twin of the car, including mathematical models of the car’s sub-assemblies.

This enables the team to test and analyse millions of car set-up and race scenarios prior to upcoming events, without a real car turning a wheel. Simulations are run for more than 50 set-up parameters in the sub-assemblies, as well as considerations for elements such as the weather conditions and driver preferences.
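At its core, this kind of set-up search is a sweep over a parameter space evaluated against a model of the car. A toy stand-in is below; the lap-time formula, parameter names, and ranges are invented, not the team’s sub-assembly models:

```python
import itertools

def simulated_lap_time(wing_angle: int, ride_height: int, track_temp_c: float) -> float:
    # Toy quadratic model standing in for a real digital twin: each
    # parameter has an optimum, and the wing optimum shifts with temperature.
    best_wing = 24 + 0.05 * (track_temp_c - 30)
    return (90.0
            + 0.02 * (wing_angle - best_wing) ** 2
            + 0.01 * (ride_height - 33) ** 2)

# Sweep the set-up grid for the forecast conditions, without a real car
# turning a wheel, and keep the fastest combination.
grid = itertools.product(range(18, 31), range(28, 40))
best_setup = min(grid, key=lambda s: simulated_lap_time(*s, track_temp_c=35.0))
```

A real sweep over 50+ parameters would use smarter search than a full grid, but the principle is the same: evaluate candidate set-ups against the model and rank them before arriving at the track.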

The digital twin also enables Mercedes-AMG’s team to constantly tweak the car, with different teams of experts working on different modules and sub-assemblies.

Visual analysis capabilities enable the vehicle dynamics teams to share their insights with the track engineers, who can then drill down, filter and run what-if scenarios and trade-offs to identify areas of performance advantage. 

Collaboration is key, and engineers at the factory, or trackside engineers travelling from race to race, share information and prepare for the Grand Prix ahead.

Engineering teams often come together en route to the track, at an airport or even on the plane, to look at the simulation data and discover opportunities for performance improvement.

Once at the racetrack, the collaboration continues, and the simulation data feeds into setting the car up for the practice sessions, qualifying and race day.

As the practice sessions unfold, the trackside team responds to changes in weather and incorporates feedback from the drivers.

This same level of analysis is used to optimise the development and manufacturing of the car.

A new cost cap was recently introduced to the sport by the FIA, the governing body of motor racing. This keeps annual spending at $145 million per team, and while this is a significant amount of money, this budget has to cover the design, engineering, operating, and racing costs for the entire Mercedes-AMG Petronas Formula One organisation.

When the car build and racing costs are taken out of the $145 million, the rest of the development budget is quite constrained.

So, any savings Mercedes-AMG can achieve boosts the development budget available to keep the car at the front of the grid.

The Mercedes-AMG and the TIBCO Data Science teams collaborated to develop Spotfire visual analytics tools that provide up-to-the-minute cost and value information to engineers and business staff.

The Spotfire analysis features a tree map of car components with drill-down to supplementary tabular data that quantify the value and cost of components.

This enables individual teams and engineers to work in parallel, optimising their sub-assembly value and cost.
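The value-cost trade-off those drill-down views expose boils down to ranking components by the performance value they deliver per unit of spend under the cap. A minimal sketch, with invented component names and numbers:

```python
# Hypothetical components with engineering value scores and costs (in £k).
components = [
    {"name": "front-wing coating", "value": 3.0, "cost_k": 40},
    {"name": "gearbox casing", "value": 8.5, "cost_k": 120},
    {"name": "floor edge", "value": 9.0, "cost_k": 60},
]

# Rank by value per unit cost, so the capped budget is spent where it
# buys the most performance.
ranked = sorted(components, key=lambda c: c["value"] / c["cost_k"], reverse=True)
```

Items at the bottom of such a ranking, like a low-value coating, become natural candidates for the kind of selective cost-trimming described below.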

The TIBCO Data Science team also developed a custom “concertina” visualisation, using “Spotfire Mods”, to analyse cost and value holistically.

In this visual analysis, all the car sub-assemblies are included and shown at a high level, with drill-down into each fold of the concertina to analyse individual value-cost trade-offs within and among sub-assemblies.

These visual analyses have resulted in some quick wins, including more selective use of protective coatings on car parts and their surfaces. Such coatings were previously applied widely, and the teams can now trim costs through more judicious use of the higher-cost ones.

Bottom line: at every twist and turn of every race, and across the season, the drivers, team managers, and engineers give data and analytics a central role in every decision.

From car design and manufacturing to car set-up, configuration, and race strategy, the Mercedes-AMG team stay at the top of the analytics heap. The past seven consecutive drivers’ and constructors’ championships are a clear testament to this.

The post Data analytics’ centrality to F1 racing appeared first on AI News.

National Robotarium pioneers AI and telepresence robotic tech for remote health consultations https://www.artificialintelligence-news.com/2021/09/20/national-robotarium-pioneers-ai-and-telepresence-robotic-tech-for-remote-health-consultations/ Mon, 20 Sep 2021 13:45:11 +0000

The post National Robotarium pioneers AI and telepresence robotic tech for remote health consultations appeared first on AI News.

The National Robotarium, hosted by Heriot-Watt University in Edinburgh, has unveiled an AI-powered telepresence robotic solution for remote health consultations.

Using the solution, health practitioners would be able to assess a person’s physical and cognitive health from anywhere in the world, and patients could access specialists whether they’re based in the UK, India, the US, or anywhere else.

Iain Stewart, UK Government Minister for Scotland, said:

“It was fascinating to visit the National Robotarium and see first-hand how virtual teleportation technology could revolutionise healthcare and assisted living.

Backed by £21 million UK Government City Region Deal funding, this cutting-edge research centre is a world leader for robotics and AI, bringing jobs and investment to the area.”

The project is part of the National Robotarium’s assisted living lab, which explores how to improve the lives of people living with a range of conditions.

Dr Mario Parra Rodriguez, an expert in cognitive assessment from the University of Strathclyde, is working on the project. He believes the solution will enable the more regular monitoring and health assessments that are critical for people living with conditions like Alzheimer’s disease and other cognitive impairments.

“The experience of inhabiting a distant robot through which I can remotely guide, assess, and support vulnerable adults affected by devastating conditions such as Alzheimer’s disease, grants me confidence that challenges we are currently experiencing to mitigate the impact of such diseases will soon be overcome through revolutionary technologies,” commented Rodriguez.

“The collaboration with the National Robotarium, hosted by Heriot-Watt University, is combining experience from various disciplines to deliver technologies that can address the ever-changing needs of people affected by dementia.”

Dr Mauro Dragone is leading the research and explains how AI was vital for the project:

“Our prototype makes use of machine learning and artificial intelligence techniques to monitor smart home sensors to detect and analyse daily activities. We are programming the system to use this information to carry out a thorough, non-intrusive assessment of an older person’s cognitive abilities, as well as their ability to live independently.

Combining the system with a telepresence robot brings two major advances: Firstly, robots can be equipped with powerful sensors and can also operate in a semi-autonomous mode, enriching the capability of the system to deliver quality data, 24 hours a day, seven days a week. 

Secondly, telepresence robots keep clinicians and carers in the loop. These professionals can benefit from the data provided by the project’s intelligent sensing system, but they can also control the robot directly, over the Internet, to interact with the individual under their care. They can see through the eyes of the robot, move around the room or between rooms and operate its arms and hands to carry out more complex assessment protocols. They can also respond to emergencies and provide assistance when needed.”
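The non-intrusive monitoring Dr Dragone describes rests on detecting patterns in time-stamped sensor events. A toy sketch of one such signal follows; the sensor names, timestamps, and threshold are invented for illustration and are not the project’s actual system:

```python
from datetime import datetime, timedelta

# Hypothetical smart-home events: (timestamp, sensor that fired).
events = [
    (datetime(2021, 9, 20, 8, 1), "kettle"),
    (datetime(2021, 9, 20, 8, 3), "fridge door"),
    (datetime(2021, 9, 20, 8, 40), "kettle"),
]

def long_inactivity_gaps(events, threshold=timedelta(minutes=30)):
    """Return gaps between consecutive events that exceed the threshold:
    a simple, non-intrusive signal a monitoring system might surface
    for a clinician or carer to review remotely."""
    times = sorted(t for t, _ in events)
    return [(a, b) for a, b in zip(times, times[1:]) if b - a > threshold]

gaps = long_inactivity_gaps(events)
```

A real system would combine many such features over weeks of data, but even this simple gap check illustrates how routine daily activity can be assessed without cameras or constant human observation.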

Earlier this month, the UK government announced tax rises to fund social care, give people the dignity they deserve, and help the NHS recover from the pandemic.

However, some believe further rises are on the horizon. Innovative technologies could help to reduce costs while maintaining or improving care.

“Blackwood is always looking for solutions that help our customers to live more independently whilst promoting choice and control for the individual. Robotics has the potential to improve independent living, provide new levels of support, and integrate with our digital housing and care system CleverCogs,” said Mr Colin Foskett, Head of Innovation at Blackwood Homes and Care.

“Our partnership with the National Robotarium and the design of the assisted living lab ensures that our customers are involved in the co-design and co-creation of new products and services, increasing our investment in innovation and in the future leading to new solutions that will aid independent living and improve outcomes for our customers.”

Our sister publication, IoT News, reported on the construction of the £22.4 million National Robotarium earlier this year—including some of the facilities, equipment, and innovative projects that it hosts.

Find out more about Digital Transformation Week North America, taking place on 9-10 November 2021, a virtual event and conference exploring advanced DTX strategies for a ‘digital everything’ world.

AI Day: Elon Musk unveils ‘friendly’ humanoid robot Tesla Bot https://www.artificialintelligence-news.com/2021/08/20/ai-day-elon-musk-unveils-friendly-humanoid-robot-tesla-bot/ Fri, 20 Aug 2021 13:23:59 +0000

The post AI Day: Elon Musk unveils ‘friendly’ humanoid robot Tesla Bot appeared first on AI News.

During Tesla’s AI Day event, CEO Elon Musk unveiled a robot that is “intended to be friendly”.

Musk has been one of the most prominent figures to warn that AI is a “danger to the public” and potentially the “biggest risk we face as a civilisation”. In 2017, he even said there was just a “five to 10 percent chance of success [of making AI safe]”.

Speaking about London-based DeepMind in a New York Times interview last year, Musk said: “Just the nature of the AI that they’re building is one that crushes all humans at all games. I mean, it’s basically the plotline in ‘War Games’”.

Unveiling a 5ft 8in AI-powered humanoid robot may seem to contradict Musk’s concerns. However, rather than leave development to parties he believes would be less responsible, Musk thinks Tesla can lead in building ethical AI and robotics.

Musk has form in this area after co-founding OpenAI. The company’s mission statement is: “To build safe Artificial General Intelligence (AGI), and ensure AGI’s benefits are as widely and evenly distributed as possible.”

Of course, it all feels a little like building nuclear weapons to deter their use; it’s an argument that’s sure to attract passionate views on either side.

During the unveiling of Tesla Bot, Musk was sure to point out that you could easily outrun and overpower it.

Tesla Bot is designed to “navigate through a world built for humans” and carry out tasks that are dangerous, repetitive, or boring. One example task is for the robot to be told to go to the store and get specific groceries.

Of course, all we’ve seen of Tesla Bot at this point is a series of PowerPoint slides (if you don’t count the weird dance by a performer dressed as a Tesla Bot, which we’re all trying our hardest to forget).

The unveiling of the robot followed a 90-minute presentation about some of the AI upgrades coming to Tesla’s electric vehicles. Tesla Bot is essentially a robot version of the company’s vehicles.

“Our cars are basically semi-sentient robots on wheels,” Musk said. “It makes sense to put that into humanoid form.”

AI Day was used to hype Tesla’s advancements in a bid to recruit new talent to the company. 

On its recruitment page, Tesla wrote: “Develop the next generation of automation, including a general purpose, bi-pedal, humanoid robot capable of performing tasks that are unsafe, repetitive or boring.

“We’re seeking mechanical, electrical, controls and software engineers to help us leverage our AI expertise beyond our vehicle fleet.”

A prototype of Tesla Bot is expected next year, although across his many ventures Musk has a history of delays and of showing products well before they’re ready. Musk says it’s important that the new machine is not “super expensive”.
