Thursday, August 30, 2012

Paper Reading #1: See me, see you: a lightweight method for discriminating user touches on tabletop displays


See Me, See You was a concept paper presented at this year's CHI conference. Its main focus was on overcoming the limitations and pitfalls of supporting multiple simultaneous users on large touch-based surfaces. The authors were:


  • Hong Zhang - Worked on a paper that evaluated and introduced new one-handed gestures for touch-based phones, written in collaboration with Pourang Irani (also an author of this paper) and others. Is affiliated with the University of Manitoba.
  • Xing-Dong Yang - Has a history of 13 other published papers covering a wide variety of topics and fields of research, though many of them focus on touch-based input and on improving or extending user interaction with touch-based features. Is affiliated with the University of Alberta.
  • Barrett Ens - Has written one other paper based on the concept of "off-screen" pointing, essentially having a touch device track finger "pointing" outside of the normal viewport of the device. Like Hong Zhang, Barrett Ens is affiliated with the University of Manitoba.
  • Hai-Ning Liang - Has published only this paper and is affiliated with the University of Manitoba as well.
  • Pierre Boulanger - A highly prolific researcher who has collaborated on 57 other papers on highly varied topics. Is affiliated with the University of Alberta.
  • Pourang Irani - Is also affiliated with the University of Manitoba and, as noted above, collaborated with Hong Zhang on earlier work on one-handed touch gestures.



Summary

On a general level, the project recognizes and attempts to correct the problems surrounding having multiple users share a single, large device. One of the largest issues the authors took into consideration was that many existing solutions for adding multi-user functionality to large touch-based screens involved dedicated additional peripherals, which both added to the cost of the devices and decreased user enjoyment by burdening users with keeping track of peripherals that had their own "rules" for usage. The basic concept of See Me, See You is to implement this functionality with as little extra effort as possible on both the developer/manufacturer side and the user side.
The picture on the left depicts the end-user functionality with this system in place. As can be seen, each user has their own "brush" that they individually apply to the picture, and none of the individual fingers are "confused" with one another. The paper repeatedly stated that this solution remained accurate whether users stood far apart or close together.

The way the system works is the following: the implementation employs a "learning system" in which the hardware "learns" the users and adapts to them with minimal direct user input. The user places a finger on the surface, and the program uses cameras and sensors to build a "profile" for each individual hand. The program distinguishes between different hands, but most importantly, it distinguishes between different finger orientations.
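The paper does not spell this learning step out in code, but a minimal sketch of the idea, assuming the hardware already reports a touch point and a rough palm position for each hand, might look like the following (the class and method names are my own, not the authors'):

```python
import math

class UserProfile:
    """Hypothetical per-user "profile" built up during the learning phase.

    Each calibration touch stores where the finger landed and the finger's
    orientation, taken as the direction from the detected palm position
    toward the touch point."""

    def __init__(self, user_id):
        self.user_id = user_id
        self.samples = []  # list of (x, y, orientation_in_degrees)

    def add_sample(self, touch_x, touch_y, palm_x, palm_y):
        # Orientation of the outstretched finger, in degrees, derived from
        # the palm-to-touch vector reported by the cameras/sensors.
        orientation = math.degrees(math.atan2(touch_y - palm_y,
                                              touch_x - palm_x))
        self.samples.append((touch_x, touch_y, orientation))
```

The key point is that the profile is built from ordinary touches rather than from any dedicated peripheral, which is what keeps the extra user effort minimal.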


This solution in particular takes into account the angle at which the finger is placed, based on the location of the hand's palm. This makes it easy for the program to learn where the user stands in relation to the other users. Essentially, it determines whether a hand with an index finger stretched out to touch the surface is oriented from the front, from the side, or from across the table relative to other users. From this, each user can touch any other place on the screen without worrying about "overlapping" with other users, a phenomenon in which the surface mistakenly believes that two fingers that are relatively close to each other actually belong to the same user. The system is thus able to accurately recognize where each user is interacting with the screen.


The system then applies the scanned hand and finger data to a predetermined "prediction" chart that identifies how the hand should look for any user touching any point on the surface. It can then distinguish which person is touching that particular area. The surface is divided into smaller "cells" to make this distinction easier for the system.
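To make the "prediction chart" idea concrete, here is a rough, hypothetical sketch of how a per-cell lookup could discriminate touches. It assumes each user's expected finger orientation toward every cell has already been precomputed from where they stand around the table; the cell size is borrowed from the evaluation section, and the function names and threshold-free nearest-match rule are illustrative rather than taken from the paper.

```python
import math

def angular_difference(a, b):
    """Smallest absolute difference between two angles, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def cell_index(x, y, cell_w=9.1, cell_h=6.2):
    """Map a touch point (in cm) onto the grid of cells covering the table."""
    return (int(x // cell_w), int(y // cell_h))

def attribute_touch(x, y, observed_orientation, prediction_chart):
    """Attribute a touch to the user whose predicted finger orientation
    for this cell is closest to the observed orientation.

    prediction_chart: {user_id: {cell: expected_orientation_in_degrees}}"""
    cell = cell_index(x, y)
    best_user, best_diff = None, float("inf")
    for user_id, per_cell in prediction_chart.items():
        if cell not in per_cell:
            continue
        diff = angular_difference(observed_orientation, per_cell[cell])
        if diff < best_diff:
            best_user, best_diff = user_id, diff
    return best_user
```

The real system presumably uses richer features than a single angle, but a nearest-orientation lookup like this captures the basic idea of deciding, cell by cell, which user a touch most plausibly came from.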



Related Works


Papers that I personally found to be of relevance to the material are:
  1. Touch me once and I know it’s you! Implicit Authentication based on Touch Screen Patterns
  2. Interactive Gesture-based Authentication for Tabletop Devices
  3. Multi-Touch Authentication on Tabletops
  4. The IR Ring: Authenticating Users’ Touches on a Multi-Touch Display
  5. Biometric-Rich Gestures: A Novel Approach to Authentication on Multi-touch Devices
  6. Spatial Authentication on Large Interactive Multi-Touch Surfaces
  7. Authenticated Tangible Interaction using RFID and Depth-Sensing Cameras
  8. Performance Enhancement of Large-Size NFC Multi-Touch System
  9. Using Mobile Phones to Spontaneously Authenticate and Interact with Multi-Touch Surfaces
  10. pPen: Enabling Authenticated Pen And Touch Interaction on Tabletop Surfaces

These papers were selected for a variety of reasons, though they all have in common some kind of authentication functionality on multi-touch devices. Some of them do not specifically use tabletop surfaces to distinguish between users, and many of them authenticate only one user at a time (such as a phone "knowing" which person is using it based on their behavior). However, the crux of the research in this project, as in the related projects, is to overcome the challenge of having the device recognize the user simply by the way they touch it. Because this is a complex problem, many different technologies and types of hardware have been applied to it.

The most obvious thing to gather from these sources is that this particular paper does not attempt to introduce a never-before-seen feature on touch devices. Instead, it aims to perfect one of the more complicated aspects of having multiple people use a multitouch device at once. Multiple examples were cited in the paper itself, and a very large number of related papers were found online independently. Many of these solutions involved cumbersome methods that reduced user experience and flexibility: some required users to remain in static positions without any chance of moving beyond a set range, some required that sensors be attached to users for proper identification, and some attempted non-peripheral-based solutions but were found to be highly inaccurate.

Another thing to note is that this solution does not rely on entirely original algorithms. Some of the algorithms used to recognize different users were developed by other researchers and were tweaked for use in this solution.

Evaluation

In order to evaluate the performance of the system, the researchers conducted several different tests, many of which had two users standing side by side along one edge of the touch surface and a third user standing along an adjacent edge. Because there were some restrictions on how users were to manipulate the screen, namely that several common touchscreen gestures were tweaked to make them easier to work with, a small amount of user "training" was required, and the researchers evaluated how well users adapted.

One of the more quantitative pieces of data gathered dealt with how each user's index finger position and orientation changed with the position of the "cell", with each cell defined as a 9.1 × 6.2 cm region of the tabletop surface. The program would highlight a cell, and each of the three participants (two standing side by side along one edge of the table, one standing along the edge perpendicular to the first two) would touch that cell, with the test program written by the researchers recording and mapping each finger position and orientation. The results were as follows:

Note how each color corresponds to a different user's finger, and how the orientations deviate slightly when going from one end of the cell mapping to the other. Most importantly, the researchers used this mapping as quantitative evidence that the technology can clearly distinguish between each user's finger in any cell on the table, and they extrapolated from this that its accuracy extends to every point on the surface. In practice, they were able to achieve accuracy as high as 98% in simple situations and 92% in more challenging scenarios.
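As a hedged sketch of how the logged orientation samples behind a mapping like this might be summarized, the snippet below computes each participant's mean finger orientation per cell, averaging angles via unit vectors so wrap-around at 360° is handled correctly; the data layout is my assumption, not the authors' actual test program.

```python
import math
from collections import defaultdict

def mean_orientation_per_cell(samples):
    """samples: iterable of (user_id, cell, orientation_in_degrees).

    Returns {user_id: {cell: mean_orientation_in_degrees}}."""
    acc = defaultdict(lambda: defaultdict(lambda: [0.0, 0.0]))
    for user_id, cell, deg in samples:
        rad = math.radians(deg)
        acc[user_id][cell][0] += math.cos(rad)
        acc[user_id][cell][1] += math.sin(rad)
    return {
        user_id: {
            cell: math.degrees(math.atan2(sy, sx))
            for cell, (sx, sy) in per_cell.items()
        }
        for user_id, per_cell in acc.items()
    }
```

A summary like this is what would feed the per-cell "prediction chart" described earlier, and it also makes it easy to see how much each user's orientation drifts from one end of the table to the other.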

Because this project inherently involves user satisfaction, many of the evaluations were understandably qualitative. Testers were asked to complete a small amount of "training" to learn the new conditions and the slightly tweaked gestures for common multitouch operations, and they were later asked about their experience with the technology and about any discomfort caused by the tweaked input methods. The researchers concluded that the negative impacts of the technology were minimal: users reported that the required adjustments were small (they needed only minutes to get accustomed to the change) and that the technology worked well the vast majority of the time without their having to actively guide it toward the desired result.

However, much of the other data gathered, mostly the accuracy percentages for how often touch inputs were correctly attributed to their owners, remained quantitative.

Discussion

I personally found this paper's technique for making a large touch surface available to multiple users at the same time intriguing. I think this particular implementation is highly versatile because it requires considerably fewer peripherals and fewer deviations from common input conventions, making things easier for hardware manufacturers, users, and programmers alike. As more large touchscreen surfaces become commonplace, the need for these devices to differentiate between multiple people will inherently present itself. The less the input and user experience change as a result of implementing this functionality, the better, and the paper does a persuasive job of arguing that this is among the most ideal solutions to the challenge.

Wednesday, August 29, 2012

Blog Entry #0

Dear lord! Who is that handsome devil who looks like he just stepped out of a beauty magazine?
Pictured: ME!
E-mail Address: ranierolg89@neo.tamu.edu
Class: 2nd year Senior

Why are you taking this class? I've always been interested in the idea that studying computing itself is only half the story. The way humans interact with it, the effects of technology on humans, and what we find out about ourselves from using said technology are aspects that I'd love to know more about.

What experience do you bring to this class? Like probably everyone else in the class, I started learning programming independently from books back in grade school, before my school offered classes in it. I completed an internship two years ago at Cisco that gave me a lot of insight into how programming is done in a professional setting.

What are your professional life goals? I want to develop technology for the end consumer, because I want to see it directly having an impact on people's lives.

What are your personal life goals? I don't have many, other than moving around to many different places. I'd like a job that lets me do that.

What do you want to do after you graduate? Hopefully get into a good graduate program either here or somewhere else.

What do you expect to be doing in 10 years? I don't know, nor do I want to. I know "not planning ahead" is frowned upon, but I find that planning what you'll be doing even a decade from now is boring. Not knowing what's in store for you in the next decade keeps things spicy.

What do you think will be the next biggest technological advancement in computer science? I think medical prosthetics are poised to make a huge splash in technology in the next few years. Artificial limbs/organs still have a long way to go but I think the technology is finally starting to catch up with the demand.

If you could travel back in time, who would you like to meet and why? I know it's cheesy, but one of my grandparents passed away before I was born, so I'd like to go back in time and meet her when she was in good health.

Describe your favorite shoes and why they are your favorite? Sneakers. They're comfortable and airy, and you can run in them just as well as you can walk in them.

If you could be fluent in any foreign language that you're not already fluent in, which one would it be and why? Probably an Asian language of some sort. Something about learning an entirely different alphabet sounds appealing.

Interesting Fact: I've said this in other classes, but I'll just repeat myself here. You will most likely never meet another person with my same first name in your lifetime ;)