
Microsoft Launches New Copilot Pages And Expands AI Features
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Microsoft announced its second wave of Microsoft 365 Copilot with multiple updates and enhancements
- The new tool Copilot Pages will be deployed within the next few weeks
- Copilot will interact with other programs like Excel, PowerPoint, Word, and Outlook
Microsoft announced multiple updates related to its AI assistant Copilot yesterday. The updates, enhancements, and new products are part of what the company calls the second wave of Microsoft 365 Copilot.
The tech giant launched a new digital AI tool called Copilot Pages, described as a “dynamic, persistent canvas designed for multiplayer AI collaboration.” Copilot Pages will be linked to Business Chat (BizChat), where an organization’s information is analyzed and stored, and powered by the AI assistant Copilot.
Copilot Pages gives users an open canvas where they can create new documents and presentations, share information with colleagues, and contribute simultaneously. It integrates multiple AI features that let users link to and interact with different applications.
Microsoft also announced that Copilot, previously integrated into Microsoft Teams, is now linked to other applications. In Excel, it lets users process data and combine it with the Python programming language; in PowerPoint, a new Narrative builder drafts presentations within minutes, and the Brand manager feature applies the company’s brand image and style.
“Microsoft 365 Copilot in Excel with Python, now in preview, allows you to: perform advanced data analysis and create sophisticated visualizations. Unlock the power of Python with Copilot.” — Microsoft 365 (@Microsoft365), September 16, 2024
Within the next few weeks, Copilot will also be integrated into Word, where prompt suggestions will help customers draft and refine content, and into Outlook, where it will help users craft emails and analyze their inboxes to surface the most relevant information and save time.
Microsoft 365 users will also be able to build AI agents for their businesses, grounded in content from platforms like SharePoint and other IT data sources.
“This is just the beginning of Wave 2 of Copilot innovation – in the next two months, we’ll be sharing more about how Copilot is supercharging productivity and accelerating business value for every customer,” wrote Jared Spataro, Corporate Vice President, AI at Work in the press release.
Microsoft 365’s new AI updates arrive just as Salesforce announces new AI features in Slack.

ChatGPT To Boost Self-Driving Cars
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Engineers found LLMs like ChatGPT can enhance AV driving capabilities.
- LLMs help AVs interpret commands naturally, improving user experience.
- AVs using LLMs were rated more comfortable than traditional models.
Purdue University engineers have reported that autonomous vehicles (AVs) can leverage ChatGPT and other chatbots, powered by artificial intelligence algorithms known as large language models (LLMs), to enhance their driving capabilities.
Their study, to be presented Sept. 25 at the 27th IEEE International Conference on Intelligent Transportation Systems, explores how LLMs help AVs interpret passenger commands more naturally, potentially marking a breakthrough in human-vehicle interaction.
Unlike current AV systems, which require precise inputs, LLMs are trained to interpret human speech in a more flexible, conversational manner.
Dr. Wang, the study’s lead researcher, explains that traditional vehicle interfaces often involve pressing buttons or issuing explicit voice commands. LLMs, by contrast, enable a more intuitive and natural dialogue with passengers.
Although LLMs don’t directly control the vehicle, the researchers explained that LLMs can be used to assist the AV’s existing systems, making the driving experience more personalized and responsive to passenger needs.
For their experiment, the research team trained ChatGPT with a variety of commands, both direct and indirect. Examples include “Drive faster” and “I feel motion sick,” teaching the model to adapt to different situations.
The researchers also tested other chatbots, like Google’s Gemini and Meta’s Llama, but found that ChatGPT performed the best.
The model processed these commands while taking into account real-time traffic conditions, weather, and data from the vehicle’s sensors.
The vehicle, which operated at level four autonomy (just one step below fully autonomous), used LLM-generated instructions to control its throttle, brakes, gears, and steering.
In some experiments, Wang’s team tested a memory module they added to the system. This allowed the large language models to store information about the passenger’s past preferences. The models then used that data to personalize their responses to future commands.
Experiments were conducted in a controlled environment, including a former airport runway in Columbus, Indiana, where the AV’s responses to commands were tested at highway speeds and intersections.
The researchers reported that participants found their rides in the LLM-assisted AV more comfortable than in traditional AV systems. The vehicle also consistently outperformed baseline safety standards, even when responding to new commands.
This is especially relevant as self-driving cars are increasingly used as taxis, where personalized experiences may enhance passenger satisfaction.
The large language models used in this study took an average of 1.6 seconds to process a passenger’s command, which Wang noted is acceptable for most situations but would need to be faster for emergencies.
While this study didn’t focus on it, large language models like ChatGPT can sometimes “hallucinate,” meaning they can produce incorrect or fabricated responses.
To address this, the team set up safety measures to protect passengers when the models misunderstood commands. The models got better at understanding commands over the course of a ride, but hallucinations would still need to be resolved before these models can be deployed in AVs.
Car manufacturers will also need to run more tests beyond the research already done by universities, and they would need regulatory approval before large language models could be fully integrated into AVs to control a vehicle’s driving functions, said Wang.