
Perplexity to introduce sneaky ads alongside its AI answers

Perplexity AI running on an iPhone 14 Pro. Joe Maring / Digital Trends

It was only a matter of time. “Answer engine” startup Perplexity AI announced on Wednesday that it will begin experimenting with inserting advertisements into its chatbot responses starting next week.

Rather than the standard ads you might be familiar with, the platform will show ads to users in the U.S. in the form of “sponsored follow-up questions and paid media positioned to the side of an answer” from the company’s advertising partners. Those include Indeed, Whole Foods, Universal McCann, and PMG.

Examples of Perplexity’s advertising formats. Perplexity AI

“Ad programs like this help us generate revenue to share with our publisher partners,” the company wrote in a Wednesday blog post. “Experience has taught us that subscriptions alone do not generate enough revenue to create a sustainable revenue-sharing program … advertising is the best way to ensure a steady and scalable revenue stream.”


The startup is quick to point out that all sponsored answers will be clearly labeled as such and that the answers themselves will still be generated by its model, not written or edited by the partner companies themselves.

“We intentionally chose these formats because it integrates advertising in a way that still protects the utility, accuracy and objectivity of answers,” the company wrote. “These ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”

Perplexity’s experimentation comes as the company faces increased competition from OpenAI, which recently released its similar SearchGPT feature, as well as multiple lawsuits over allegations that the company’s data scraping practices amount to copyright infringement on a “massive scale.”

Perplexity has also been served cease-and-desist letters from both The New York Times and Condé Nast over its behavior. Whether advertisers will be willing to overlook those glaring issues remains to be seen. If not, then Perplexity may be limited to its sole existing income source: its $20-per-month Perplexity Pro subscription.

Andrew Tarantola
Former Digital Trends Contributor
Opera One puts an AI in control of browser tabs, and it’s pretty smart

Opera One browser has lately won a lot of plaudits for its slick implementation of useful AI features, a clean design, and a healthy bunch of chat integrations. Now, it is putting AI in command of your browser tabs, and in a good way.
The new feature is called AI Tab Commands, and it essentially allows users to handle their tabs using natural language commands. All you need to do is summon the onboard Aria AI assistant, and it will handle the rest like an obedient AI butler.
The overarching idea is to let the AI handle multiple tabs at once, not just one. For example, you can ask it to “group all Wikipedia tabs together,” “close all the Smithsonian tabs,” or “shut down the inactive tabs.”

A meaningful AI for web browsing
Handling tabs is a chore in any web browser, and if internet research is part of your daily job, you know the drill. Manually rearranging tabs with a mix of cursor movements and keyboard shortcuts, naming them, and scanning the entire list of open tabs is tedious work.
Deploying an AI to do it locally, using only natural language commands, is a lovely convenience and one of the nicest implementations of AI I’ve seen lately. Interestingly, Opera is also working on a futuristic AI agent that will get browser-based work done using only text prompts.
Coming back to the AI-driven tab management, the entire process unfolds locally, and no data is sent to servers, which is a neat assurance. “When using Tab Commands and asking Aria to e.g. organize their tabs, the AI only sends to the server the prompt a user provides (e.g., “close all my YouTube tabs”) – nothing else,” says the company.
To summon the AI tab manager, users can hit the Ctrl + / shortcut, or Command + / on macOS. It can also be invoked with a right-click on the tabs, as long as there are five or more currently open in a window.
https://x.com/opera/status/1904822529254183166?s=61
Aside from closing or grouping tabs, AI Tab Commands can also pin tabs, and it accepts exceptions, such as “close all tabs except the YouTube tabs.” Notably, the feature is also making its way to Opera Air and the gaming-focused Opera GX browser.
On the subject of grouping related tabs, Opera uses a neat system called tab islands rather than the color-coded tab groups at the top of the window found in Chrome or Safari. Opera’s implementation looks better and works really well.
Notably, the AI Tab Commands window also comes with an undo shortcut for scenarios where you want to revert an action, like reviving a bunch of closed tabs. Opera One is now available to download on Windows and macOS. Opera also offers Air, a browser that puts some zen into your daily workflow.

Read more
Microsoft 365 Copilot gets an AI Researcher that everyone will love

Microsoft is late to the party, but it is finally bringing a deep research tool of its own to the Microsoft 365 Copilot platform across the web, mobile, and desktop. Unlike competitors such as Google Gemini, Perplexity, or OpenAI’s ChatGPT, all of which use the Deep Research name, Microsoft is going with the Researcher agent branding.
The overarching idea, however, isn’t too different. You tell the Copilot AI to come up with thoroughly researched material on a certain topic or create an action plan, and it will oblige by producing a detailed document that would otherwise take hours of human research and compilation. It’s all about performing complex, multi-step research on your behalf as an autonomous AI agent.
Just to avoid any confusion early on, Microsoft 365 Copilot is essentially the rebranded version of the erstwhile Microsoft 365 (Office) app. It is different from the standalone Copilot app, which is more like a general purpose AI chatbot application.
How does the Researcher agent work?
Underneath the Researcher agent is OpenAI’s Deep Research model. But this is not a simple rip-off: the feature’s implementation in Microsoft 365 Copilot goes deeper than the competition, primarily because it can also look at your own material, or a business’s internal data.
Instead of pulling information solely from the internet, the Researcher agent can also take a look at internal documents such as emails, chats, internal meeting logs, calendars, transcripts, and shared documents. It can also reference data from external sources such as Salesforce, as well as other custom agents that are in use at a company.
“Researcher’s intelligence to reason and connect the dots leads to magical moments,” claims Microsoft. Users can configure the Researcher agent to reference data from the web, local files, meeting recordings, emails, chats, and sales agents on an individual basis: all of them, or just a select few.

Why does it stand out?

Read more
Samsung might put AI smart glasses on the shelves this year

Samsung’s Project Moohan XR headset has grabbed the spotlight in the past few months, and rightfully so. It serves as the flagship launch vehicle for a reinvigorated Android XR platform, with plenty of hype from Google’s own quarters.
But it seems Samsung has even more ambitious plans in place and is reportedly experimenting with form factors beyond the headset. According to Korea-based ET News, the company is working on a pair of smart glasses and aims to launch them by the end of this year.
Currently in development under the codename “HAEAN” (machine-translated name), the smart glasses are reportedly in the final stages of locking in the internal hardware and functional capabilities. The wearable device will reportedly come equipped with camera sensors as well.

What to expect from Samsung’s smart glasses?
The Even G1 smart glasses have optional clip-on gradient shades. Photo by Tracey Truly / Digital Trends
The latest leak doesn’t dig into specifics about the internal hardware, but another report from Samsung’s home market sheds some light on the possibilities. As per Maeil Business Newspaper, the Samsung smart glasses will feature a 12-megapixel camera built atop a Sony IMX681 CMOS image sensor.
It is said to offer a dual-silicon architecture, similar to Apple’s Vision Pro headset. The main processor on Samsung’s smart glasses is touted to be Qualcomm’s Snapdragon AR1 platform, while the secondary processing hub is a chip supplied by NXP.
The onboard camera will open the doors for vision-based capabilities, such as scanning QR codes, gesture recognition, and facial identification. The smart glasses will reportedly tip the scales at 150 grams, while the battery size is claimed to be 155 mAh.

Read more