Major automaker Volkswagen is planning to integrate AI features into its compact car models, starting in the European market.
Automaker Volkswagen has announced that Artificial Intelligence (AI) will be integrated into several of its vehicle models later this year.
At International CES, the ongoing tech event in Las Vegas, the carmaker
unveiled plans to combine ChatGPT with the existing IDA voice assistant offered in several of its compact vehicle models.
ChatGPT is a chatbot, a computer program that simulates and processes human conversation, both written and spoken.
According to
executives from Volkswagen and Cerence, the software company that partnered with the automaker on the technology, cars with the new AI capabilities will be able to perform a variety of functions hands-free thanks to the inclusion of the chatbot.
Activities such as adjusting the cabin temperature, controlling navigation, and managing in-car entertainment can all be done via voice command using AI.
ChatGPT functionality for Volkswagen cars is expected to launch in European markets in the second quarter of 2024. Approval for the United States market is pending, and a timeline for when American drivers will be able to use AI capabilities in their Volkswagen vehicles isn’t known at this stage.
Volkswagen claims it is the first volume manufacturer to make AI tech a standard feature in its compact segment cars. However, many of its competitors are experimenting with AI as well.
General Motors revealed in March last year that it was working on a virtual personal assistant using the same AI models behind ChatGPT, while Mercedes-Benz ran a test program last June allowing vehicles equipped with the automaker’s MBUX system to download ChatGPT. Most of the features offered by Mercedes-Benz were similar to what Volkswagen has just announced for its cars.
AI Regulations Still A Work in Progress
Concerns around reckless AI development have been growing in recent years, and they only accelerated after ChatGPT became available for public use on Nov. 30, 2022. Terrorists and other bad actors using the technology for ill intent have been flagged as a considerable concern, as has the potential for companies to use it to infringe on customers’ rights. Consumer advocacy groups and even tech moguls like Elon Musk have been
pushing for governments to develop regulations for generative AI technology sooner rather than later.
The U.S. has already taken steps to regulate AI development and prevent some of the predicted worst-case scenarios from becoming a reality. President Joe Biden issued an
executive order creating new standards for AI safety and security, with the goal of
protecting Americans’ privacy and civil rights. Senators from both major parties also united in November last year to introduce an AI bill, directing federal agencies to create standards providing transparency and accountability for AI tools.
Europe has also taken a few steps on its path to AI regulation with the European Union’s Artificial Intelligence Act (EU AI Act), which aims to set clear rules for using and developing AI.
One of the key provisions of the act is to classify AI systems into four risk categories based on their use cases: unacceptable risk, high risk, limited risk, and minimal risk.