Google unveils AI enhancements to Search and Maps

Google used an event in Paris to unveil some of the latest AI advancements to its Search and Maps products.

The last-minute event was largely seen as a response to Microsoft’s integration of OpenAI’s models into its products. Just yesterday, Microsoft held an even more impromptu event where it announced that a new version of OpenAI’s ChatGPT chatbot – reportedly based on GPT-4 – would be integrated into the Edge browser and Bing search engine.

Google was expected to make a large number of AI announcements at its I/O developer conference in May. The event this week felt like a rushed and unpolished attempt by Google to remind the world (or, more likely, investors) that it’s also an AI leader and hasn’t been left behind.

OpenAI reportedly set off alarm bells at Google with ChatGPT. At the invitation of Google CEO Sundar Pichai, the company’s founders – Larry Page and Sergey Brin – returned for a series of meetings to review Google’s AI product strategy.

In the wake of those meetings, it was allegedly decided that Google would speed up its AI review process so it can deploy solutions more quickly. Amid those reports, and Google’s firing of high-profile ethics researchers, many are concerned that the company will rush unsafe products to market.

Prabhakar Raghavan, SVP at Google, led proceedings. In his opening remarks, he stated that Google’s goal is to “significantly improve the lives of as many people as possible”. Throughout the event, the various speakers were at pains to push the narrative that Google won’t take risks.

“When it comes to AI, it’s critical that we bring models to the world responsibly,” said Raghavan.

Google Search

Search is Google’s bread-and-butter. The threat that a ChatGPT-enhanced Bing could pose to Google appears to have been what caused such alarm within the company.

“Search is still our biggest moonshot,” said Raghavan, adding that “the moon keeps moving”.

Google used this section to highlight some of the advancements it’s been making in the background that most users won’t be aware of. These include the use of zero-shot machine translation, which added two dozen new languages to Google Translate over the past year.

Another product that continues to be enhanced by AI is Google Lens, which is now used more than 10 billion times per month.

“The camera is the next keyboard,” said Raghavan. “The age of visual search is here.”

Liz Reid, VP of Engineering at Google, took the stage to provide an update on what the company is doing in this area.

Google Lens is being expanded to support video content. A user can activate Lens, touch something they want to learn more about in a video clip (such as a landmark), and Google will bring up more information about it.

“If you can see it, you can search it,” said Reid.

Multi-search is another impressive visual search enhancement that Google showed off. The feature allows users to search with both an image and text so that, for example, they could try to find a specific chair or item of clothing in a different colour.

Google was going to give a live demo of multi-search but awkwardly lost the demo phone. Fortunately, the company said the feature is now live globally, so you can give it a go yourself.

Few companies have access to the amount of information about the world and its citizens that Google does. Privacy arguments aside, it enables the company to offer powerful services that complement one another.

Reid said that users will be able to take a photo of something like a bakery item and ask Google to source a nearby place from Google Maps where they can get their hands on an equivalent. Google says that feature is coming soon to images on mobile search results pages.

Bard

Raghavan retook the stage to discuss Google’s response to ChatGPT.

Google’s conversational AI service is called Bard and it’s powered by LaMDA (Language Model for Dialogue Applications).

LaMDA is a model that’s built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. Instead of relying on pre-defined responses like older chatbots, LaMDA is trained on dialogue for more open-ended natural interactions and can deliver up-to-date information from the web.
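The Transformer’s core mechanism is scaled dot-product self-attention, which lets a model weigh every token in a sequence against every other token when building its representations. As a purely illustrative sketch of that mechanism (a minimal NumPy toy, not Google’s implementation of LaMDA):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention over (seq_len, d_k) query/key/value arrays."""
    d_k = Q.shape[-1]
    # Score every query against every key; scale to stabilise the softmax.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys converts scores into attention weights per query.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted blend of the value vectors.
    return weights @ V

# Toy usage: 4 tokens with 8-dimensional representations attending to themselves.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```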

In an example interaction, Raghavan asked Bard what he should consider when buying a new car. He then asked for the pros and cons of an electric car. Finally, he asked Bard to help him plan a road trip.

Bard is now available to trusted testers, but Raghavan said that Google will check it meets the company’s “high bar” for safety before a broader rollout.

The company says that it’s embracing NORA (No One Right Answer) for subjective questions like, “What is the best constellation to look for when stargazing?” Generative AI will be used in such instances to bring multiple viewpoints to results, which sounds quite similar to what Google has been doing in Google News for some time to help address bias concerns.

Raghavan went on to highlight that the potential of generative AI goes far beyond text. He noted that Google can use generative AI to create a 360-degree view of items like sneakers from just a handful of images.

Next month, Google will begin onboarding developers for its Generative Language API to help them access some powerful capabilities. Initially, the API will be powered by LaMDA. Raghavan said that “a range of models” will follow.
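Google published no technical details of the Generative Language API alongside the announcement, so any code at this stage is guesswork. Purely as a sketch of what a REST-style generation call might look like, with an entirely hypothetical endpoint, parameter names, and response shape:

```python
import requests

API_KEY = "YOUR_API_KEY"

# Hypothetical endpoint and payload: Google had not published the real API
# surface at the time of writing, so treat every name below as a placeholder.
url = "https://example.googleapis.com/v1/models/lamda:generateText"

payload = {
    "prompt": "Suggest three things to consider when buying a new car.",
    "temperature": 0.7,     # typical generation knobs; real names may differ
    "maxOutputTokens": 256,
}

response = requests.post(url, params={"key": API_KEY}, json=payload)
response.raise_for_status()
print(response.json())  # the response schema is likewise unspecified
```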

Google Maps

Chris Phillips, Head of Google’s Geo Group, took to the stage to give an overview of some of the AI enhancements the company is bringing to Google Maps.

Phillips said that AI is “powering the next generation of Google Maps”. Google is using AI to fuse billions of Street View and real-world images to evolve 2D maps into “multi-dimensional views” that will enable users to virtually soar over buildings if they’re planning a visit.

However, most impressive is how AI is enabling Google to take 2D images of indoor locations and turn them into 3D views that people can explore. One example given of where this could be useful is checking out a restaurant ahead of a date to see whether the lighting and general ambience are romantic.

Additional enhancements are being made to ‘Search with Live View’, which uses AR to help people find nearby things like ATMs.

When searching for things like coffee shops, you can see whether they’re open and even how busy they typically are, all from the AR view.

Google says that it’s making its largest expansion of Indoor Live View yet, with the feature rolling out today to 1,000 new airports, train stations, and shopping centres.

Finally, Google is helping users make more sustainable transport choices. Phillips said that Google wants to “make the sustainable choice, the easy choice”.

New Google Maps features for electric vehicle owners will help with trip planning by factoring in traffic, charge level, and energy consumption. Charging stop recommendations will be improved and a “Very fast” charging filter will help EV owners pick somewhere they can get topped up quickly and be on their way.

Even more sustainable than driving an EV is walking. Google is making walking directions more “glanceable” from the route overview. The company says the feature is rolling out globally on Android and iOS over the coming months.

Raghavan retook the stage to highlight that Google is “25 years into search” but teased that, in some ways, it is “only just beginning”. He went on to say that more is in the works and the “best is yet to come”.

Google I/O 2023 just got that much more exciting.

(Photo by Mitchell Luo on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London.

Explore other upcoming enterprise technology events and webinars powered by TechForge here.

OpenAI now allows developers to customise GPT-3 models

OpenAI is making it easy for developers to “fine-tune” GPT-3, enabling custom models for their applications.

The company says that existing datasets of “virtually any shape and size” can be used for custom models.

A single command in the OpenAI command-line tool, alongside a user-provided file, is all that it takes to begin training. The custom GPT-3 model will then be available for use in OpenAI’s API immediately.
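For context, the workflow with the OpenAI tooling of the time looked roughly like this (a sketch assuming the pre-1.0 openai Python package and the era’s “curie” base model; exact names may since have changed):

```python
import openai

openai.api_key = "YOUR_API_KEY"

# Training data is JSONL: one {"prompt": ..., "completion": ...} pair per line.
# 1. Upload the dataset.
upload = openai.File.create(file=open("data.jsonl", "rb"), purpose="fine-tune")

# 2. Start the fine-tune against a base model.
job = openai.FineTune.create(training_file=upload["id"], model="curie")
print(job["id"])

# The single-command CLI equivalent mentioned above:
#   openai api fine_tunes.create -t data.jsonl -m curie

# 3. Once training finishes, the custom model is usable like any other:
#   openai.Completion.create(model=job["fine_tuned_model"],
#                            prompt="...", max_tokens=50)
```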

One customer says that it was able to increase correct outputs from 83 percent to 95 percent through fine-tuning. Another client reduced error rates by 50 percent.

Andreas Stuhlmüller, Co-Founder of Elicit, said:

“Since we started integrating fine-tuning into Elicit, for tasks with 500+ training examples, we’ve found that fine-tuning usually results in better speed and quality at a lower cost than few-shot learning.

This has been essential for making Elicit responsive at the same time as increasing its accuracy at summarising complex research statements.

As far as we can tell, this wouldn’t have been doable without fine-tuning GPT-3.”

Joel Hellermark, CEO of Sana Labs, commented:

“With OpenAI’s customised models, fine-tuned on our data, Sana’s question and content generation went from grammatically correct but general responses to highly accurate semantic outputs which are relevant to the key learnings.

This yielded a 60 percent improvement when compared to non-custom models, enabling fundamentally more personalised and effective experiences for our learners.”

In June, Gartner predicted that, by 2024, 80 percent of technology products and services will be built by those who are not technology professionals. OpenAI is enabling custom AI models to be easily created to unlock the full potential of such products and services.

Related: OpenAI removes GPT-3 API waitlist and opens applications for all developers

(Photo by Sigmund on Unsplash)

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo. The next events in the series will be held in Santa Clara on 11-12 May 2022, Amsterdam on 20-21 September 2022, and London on 1-2 December 2022.

OpenAI removes GPT-3 API waitlist and opens applications for all developers

OpenAI has removed the waitlist to access its GPT-3 API, which means any developer can apply to get started.

The AI giant unveiled GPT-3 in May last year to a mixed reception. Few doubted GPT-3’s impressive ability to generate text that reads as though written by a human, but many expressed concerns about the societal impact.

Fake news and propaganda are already difficult to counter even when they’re being generated in relatively limited amounts by human writers. The ability for anyone to use an AI to generate misinformation at scale could have serious implications.

A paper (PDF) from the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism found that GPT-3 is able to generate “influential” text that has the potential to radicalise people into far-right extremist ideologies.

OpenAI itself shared those concerns and decided against releasing GPT-3 to the public at the time. Instead, only select trusted researchers and developers were given access.

The company gradually provided access to GPT-3 to more developers through a waitlist. OpenAI says “tens of thousands” of developers are already taking advantage of powerful AI models through its platform.

However, OpenAI has also been building a number of “safeguards” that have made the company feel comfortable removing the waitlist.

These safeguards include “instruct” models that are designed to adhere better to human instructions, specialised endpoints for more truthful question-answering, and a free content filter to help developers mitigate abuse.

“To ensure API-backed applications are built responsibly, we provide tools and help developers use best practices so they can bring their applications to production quickly and safely,” wrote OpenAI in a blog post.

“As our systems evolve and we work to improve the capabilities of our safeguards, we expect to continue streamlining the process for developers, refining our usage guidelines, and allowing even more use cases over time.”

OpenAI has improved ‘Playground’ to make it even simpler for researchers to prototype with its models.

The company has also added an example library with dozens of prompts to get developers started. Codex, OpenAI’s new model for translating natural language into code, also makes an appearance.

Developers in supported countries can sign up and get started experimenting with OpenAI’s API right now.
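A first experiment takes only a few lines. The sketch below assumes the pre-1.0 openai Python package and the engine names available in late 2021, such as the instruction-following ‘davinci-instruct-beta’:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # from the OpenAI dashboard

# 'davinci-instruct-beta' was one of the "instruct" engines described above;
# plain 'davinci' was the base GPT-3 model.
response = openai.Completion.create(
    engine="davinci-instruct-beta",
    prompt="Explain what an API waitlist is in one sentence.",
    max_tokens=60,
    temperature=0.3,
)

print(response["choices"][0]["text"].strip())
```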

19/11 update: An earlier version of the headline said that the API was “generally” available. This has been updated to clarify that an application process is still in place and that usage will still be reviewed by OpenAI.

(Photo by Dima Pechurin on Unsplash)

Looking to revamp your digital transformation strategy? Learn more about the Digital Transformation Week event taking place in Amsterdam on 23-24 November 2021 and discover key strategies for making your digital efforts a success.

Google launches cross-platform ML Kit APIs to simplify AI integration

Going by this year’s I/O conference, Google is all in on AI, and it’s helping developers access some of those capabilities with its ML Kit set of APIs.

ML Kit is a new suite of cross-platform APIs from Google that enables app developers to use machine learning for things such as face detection, text scanning, barcode reading, and even identifying objects and landmarks.

From the ML Kit documentation page:

“We want the entire device experience to be smarter, not just the OS, so we’re bringing the power of Google’s machine learning to app developers with the launch of ML Kit, a new set of cross-platform APIs available through Firebase.

ML Kit offers developers on-device APIs for text recognition, face detection, image labelling and more. So mobile developers building apps like Lose It!, a nutrition tracker, can easily deploy our text recognition model to scan nutritional information and ML Kit’s custom model APIs to automatically classify over 200 different foods with your phone’s camera.”

Many of these abilities can run offline, but the results are more limited than when connected to Google’s cloud. For example, the on-device version of the API could detect that a dog is in a photo, but when connected to the internet it could recognise the specific breed.
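ML Kit itself is a mobile SDK consumed from Android or iOS, so there is no Python client; its cloud-backed features, however, were backed by Google’s Cloud Vision API, which can serve as a rough server-side analogue. A sketch using the google-cloud-vision client to perform the kind of richer label detection described above (an analogue, not ML Kit itself):

```python
from google.cloud import vision  # pip install google-cloud-vision

# Uses credentials from the GOOGLE_APPLICATION_CREDENTIALS environment variable.
client = vision.ImageAnnotatorClient()

with open("dog.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Cloud-side label detection can return finer-grained results (e.g. a
# specific breed) than an on-device model constrained by size and power.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```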

Google says any data sent to its cloud is deleted after processing.

ML Kit simplifies what used to be a complicated process and makes AI more accessible. Rather than having to learn complex machine learning libraries such as TensorFlow, gather enough data to train a model, and then make that model light enough to run on a mobile device, developers can access many common features via an API call on Google Firebase.

Developers wanting to get started with ML Kit can find it in the Firebase console.

What are your thoughts on Google’s ML Kit? Let us know in the comments.

Interested in hearing industry leaders discuss subjects like this and sharing their use-cases? Attend the co-located AI & Big Data Expo events with upcoming shows in Silicon Valley, London, and Amsterdam to learn more. Co-located with the IoT Tech Expo, Blockchain Expo, and Cyber Security & Cloud Expo so you can explore the future of enterprise technology in one place.
