Associating the visual representation of user interfaces with their internal structures and metadata
| Field | Detail |
|---|---|
| Main Authors | |
| Other Authors | |
| Format | Article |
| Language | English |
| Published | Association for Computing Machinery (ACM), 2019-06-27T16:21:20Z |
| Subjects | |
| Online Access | Get fulltext |
Summary: Pixel-based methods are emerging as a new and promising way to develop new interaction techniques on top of existing user interfaces. However, in order to maintain platform independence, other available low-level information about GUI widgets, such as accessibility metadata, has been intentionally neglected. In this paper, we present a hybrid framework, PAX, which associates the visual representation of user interfaces (i.e. the pixels) with their internal hierarchical metadata (i.e. the content, role, and value). We identify challenges to building such a framework. We also develop and evaluate two new algorithms: one for detecting text at arbitrary places on the screen, and one for segmenting a text image into individual word blobs. Finally, we validate our framework in implementations of three applications. We enhance an existing pixel-based system, Sikuli Script, while preserving the readability of its script code. Further, we create two novel applications, Screen Search and Screen Copy, to demonstrate how PAX can be applied to the development of desktop-level interactive systems.

Funding: National Science Foundation (U.S.) (award number IIS-0447800); Quanta Computer Incorporated (as part of the TParty project)
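The summary describes PAX as associating the on-screen pixels of a user interface with the hierarchical metadata (content, role, and value) that accessibility APIs expose. As a rough illustration of that idea only, the following Python sketch pairs a screen region with such metadata and resolves a screen coordinate to the widget under it; the names used here (Region, WidgetMetadata, UIElement, element_at) are hypothetical and are not part of the actual PAX or Sikuli Script APIs.

```python
# Hypothetical sketch (not the actual PAX API): pair a screen region
# (the pixels) with accessibility metadata (content, role, value) and
# resolve a screen coordinate to the widget metadata under it.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Region:
    x: int
    y: int
    width: int
    height: int

    def contains(self, px: int, py: int) -> bool:
        return (self.x <= px < self.x + self.width and
                self.y <= py < self.y + self.height)


@dataclass
class WidgetMetadata:
    content: str   # e.g. the visible text of the widget
    role: str      # e.g. "button", "text field"
    value: str     # e.g. the current value of a slider or field


@dataclass
class UIElement:
    region: Region            # where the widget's pixels appear on screen
    metadata: WidgetMetadata  # metadata as exposed by accessibility APIs
    children: List["UIElement"]


def element_at(root: UIElement, px: int, py: int) -> Optional[UIElement]:
    """Return the deepest element whose on-screen region contains (px, py)."""
    if not root.region.contains(px, py):
        return None
    for child in root.children:
        hit = element_at(child, px, py)
        if hit is not None:
            return hit
    return root


if __name__ == "__main__":
    button = UIElement(Region(100, 200, 80, 24),
                       WidgetMetadata("OK", "button", ""), [])
    window = UIElement(Region(0, 0, 800, 600),
                       WidgetMetadata("Demo Window", "window", ""), [button])
    hit = element_at(window, 120, 210)
    print(hit.metadata.role, hit.metadata.content)  # -> button OK
```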