San Francisco, Jun 16 (IANS): Apple is continuing to refine a system for sharing information and context between devices with Siri, allowing for more nuanced and location-aware voice commands.
The tech giant was granted a patent for "Digital assistant hardware abstraction," which relates to intelligent context sharing and task performance across a group of devices with intelligent assistant capabilities.
The goal is to create the appearance that a single digital assistant is performing a task, instead of multiple digital assistants on different devices, reports AppleInsider.
The patent then outlines a few examples, focusing on commands given to groups of devices.
The system relies on aggregate context and data shared among all devices with an intelligent digital assistant, and then determines how to handle a command based on that data.
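As a rough illustration only, the pooled information could resemble the sketch below. The type and field names are hypothetical and are not drawn from the patent; the idea is simply that each device contributes its own context, which a collector gathers into one aggregate.

```swift
import Foundation

// Hypothetical sketch of the per-device context that might be pooled
// before a command is routed. All type and field names are
// illustrative, not Apple's.
struct DeviceContext {
    let deviceID: UUID          // which device reported this context
    let heardCommandAt: Date?   // when (if at all) it heard the voice input
    let signalStrength: Double  // 0.0...1.0 connectivity to its network
    let batteryLevel: Double    // 0.0...1.0 remaining charge
}

// The aggregate context is simply every participating device's context,
// gathered by a context-collector device.
typealias AggregateContext = [DeviceContext]
```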
For example, in one scenario, the system could differentiate between instances of the same command by noting when a specific device is triggered and comparing that time frame with when other devices received the same command. In such cases, the redundant devices would forego "further processing of the user voice input."
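Building on the hypothetical context type above, a minimal sketch of how such timing-based arbitration could work in principle follows; the 0.5-second window and the function name are assumptions made for the example, not details from the filing.

```swift
// Illustrative arbitration sketch (not Apple's implementation): among
// devices that heard the same utterance within a short window, only the
// earliest-triggered device responds; the rest forego further
// processing of the voice input. The window length is an assumption.
func devicesToSilence(in context: AggregateContext,
                      window: TimeInterval = 0.5) -> [UUID] {
    let heard = context
        .compactMap { d in d.heardCommandAt.map { (id: d.deviceID, time: $0) } }
        .sorted { $0.time < $1.time }
    guard let winner = heard.first else { return [] }
    // Devices triggered shortly after the winner are treated as having
    // heard the same utterance and should stop processing it.
    return heard
        .dropFirst()
        .filter { $0.time.timeIntervalSince(winner.time) <= window }
        .map { $0.id }
}
```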
Some of this technology is already used in Apple's HomeKit and Siri systems.
For example, if a user gives a "Hey Siri" command in the presence of a HomePod, iPhone, and Apple Watch, only one of those devices will actually receive and carry out the command.
The patent gives a slew of details on various Apple devices and explains how the system could determine which device to use in a given situation.
For example, a context-collector system could assign each device a score based on the strength of the connection between that device and its network. Another portion describes a system for determining a device's remaining battery life.
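Reusing the hypothetical context type from the earlier sketch, one way such a score could be combined is shown below. The patent names connectivity strength and battery life as inputs; the weighting and the specific formula here are assumptions made purely for illustration.

```swift
// Illustrative device-scoring sketch; the weights are assumptions, not
// values from the patent.
func deviceScore(for device: DeviceContext,
                 connectivityWeight: Double = 0.7,
                 batteryWeight: Double = 0.3) -> Double {
    connectivityWeight * device.signalStrength + batteryWeight * device.batteryLevel
}

// The context collector could then hand the task to the highest-scoring device.
func bestDevice(in context: AggregateContext) -> UUID? {
    context.max(by: { deviceScore(for: $0) < deviceScore(for: $1) })?.deviceID
}
```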