12/29/2023

Developing a new Alexa skill typically means training a machine-learning system with annotated data, and the skill's ability to "understand" natural-language requests is limited by the expressivity of the semantic representation used to do the annotation. So far, the techniques used to represent natural language have been fairly simple, so Alexa has been able to handle only relatively simple requests.

At the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT) that begins this weekend, we will present a new, more sophisticated semantic-representation language that we call the Alexa Meaning Representation Language. Data annotated in the language should enable Alexa skills to handle much more complex conversational interactions and to process simple interactions more accurately. This new representation now powers the library of built-in "intents" - actions that skills can perform - available to skill developers to help them bootstrap their natural-language-understanding (NLU) systems.

Traditionally, data used to train Alexa skills has been annotated using the flat semantic representation of "domain," "intent," and "slot." "Domain" describes the class of skill that the utterance is meant to invoke, such as MusicApp or HomeAutomation. (Each domain may have multiple associated skills. For instance, the skills ClassicMusic and PopMusic might both fall under the MusicApp domain.) "Intent" describes the action that the skill is being asked to perform, such as PlayTune or ActivateAppliance. And "slot" describes the entities and classes of entities on which the action is to operate, such as "song," "'Thriller,'" and "Michael Jackson" in the command "Play 'Thriller' by Michael Jackson."

Alexa's popularity attests to the success of this relatively simple annotation scheme. But to realize the goal of seamless conversational interaction, Alexa skills must be able to both interpret more complex linguistic structures and distinguish between competing interpretations of simple ones.

For instance, Alexa should, ideally, be able to handle utterances like "Alexa, find me a restaurant near the Mariners game," which spans two domains, local businesses and sporting events. Conversely, Alexa should be able to resolve utterances with similar structures into different domains, as occasion warrants - for instance, "Alexa, order me a cab," versus "Alexa, order me an Echo Dot." With the existing, flat annotation scheme, training a machine-learning system to better handle one of these instances will weaken its ability to handle the other. Distinguishing the two use cases in the training data would require such overspecification of intents and slots that the system would lose the ability to exploit the general form of the sentence.

The Alexa Meaning Representation Language (AMRL) addresses these problems. We build on previous work on graph-based semantic representations and adapt them to conversational systems. Our solution consists of two key components:
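To make the flat domain/intent/slot scheme concrete, here is a minimal sketch of how the command "Play 'Thriller' by Michael Jackson" might be annotated. The field names and dictionary layout are illustrative assumptions for this example, not Alexa's actual annotation format:

```python
# Illustrative sketch of a flat domain/intent/slot annotation.
# The dictionary layout and slot names are hypothetical examples,
# not Alexa's real training-data format.
annotation = {
    "utterance": "Play 'Thriller' by Michael Jackson",
    "domain": "MusicApp",      # class of skill being invoked
    "intent": "PlayTune",      # action the skill should perform
    "slots": {                 # entities the action operates on
        "SongName": "Thriller",
        "ArtistName": "Michael Jackson",
    },
}

# A trained NLU model would predict the domain, intent, and slot
# values from the raw utterance text.
print(annotation["domain"], annotation["intent"])
```

Because the representation is a single flat tuple of domain, intent, and slots, an utterance like "find me a restaurant near the Mariners game" has no natural encoding: it would have to be forced into exactly one domain, which is the limitation the graph-based AMRL is designed to remove.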