Tuesday, November 20, 2012

Opening Skinner's Box: Chapters 9 & 10

Chapter 9

The idea that what humans recognize as "memory" has a physical manifestation in the brain was the most intriguing concept introduced in this book. Of course, Chapter 10 goes into much more depth on the topic explored at the beginning of this chapter, with Harry's mental condition leading to brain surgery that "fixed" his problem but brought about an irreversible issue with long-term memory. I suppose I had always thought of "memory" in terms of what Slater described as the outdated notion: that the human mind is so esoteric and complex that memory isn't something that can be mapped to the brain, but rather "data" strewn haphazardly about the physical organ. Looking back, I most likely thought this because memories are such an abstract concept in their own right. I've viewed them as very "high level" operations of the brain, and assuming memory was "stored" somewhere specific felt relatively primitive. However, that supposedly primitive notion proved to be the case, as Harry unfortunately showed.

Another aspect of the chapter that I found quite interesting is the concept of pills that allow one to "forget" things within a period of time. Slater raised a very valid question about this pill: it could just as easily be used by aggressors on their victims as by the victims themselves. At any rate, it's unlikely that a traumatic incident can be fully erased simply by forgetting the past 24 hours. Say a loved one is killed and one swallows one of these pills; what is the person to think a day later upon realizing their loved one is missing? In the case of bodily injury, does it make any difference to forget the event itself when one must deal with the aftermath for months on end? Couldn't this have a detrimental or even outright malicious use in criminal cases, where a key witness can choose to "forget" and instantly ruin a case, or an aggressor can deliberately use the pill to destroy "evidence"? The technology behind this is very intriguing, but in all honesty its uses seem far more weighted toward the malicious than the benign.

Chapter 10

This final chapter deals more with the physical-brain side of psychology, which I found very intriguing. Psychology, while interesting on its own, seems too subjective a science to hold my interest for long, but the concept of physically altering a person's brain to change one's mental composition is downright fascinating. This is one of the few instances I've read of in which a "high level" brain function such as personality can be linked directly to the brain, to such a degree that physically severing certain sections of it can remove these traits completely.

One issue that Slater only touches on, but which I find important to discuss, is the altering of what makes a person unique. Indeed, it's unlikely that anyone would consider brain surgery for anything other than what they consider an unbearable and uncontrollable deficiency, but should a person suffering from such a condition suddenly become "normal", is that person's personality changed as well? "Personality" can be altered by things as subtle as mannerisms; can't this unfavorably "alter" the person in this regard? What if brain surgery in some hypothetical future becomes commonplace? Would people changing their personalities en masse destroy themselves as people under a false pretense of chasing "perfection"? "Feeling low" is an effect Slater observed for herself after watching an operation of this kind; where does this feeling come from, exactly? Is it simply a fleeting aftereffect of the surgery, or is it the mind's response to having had something removed from it?

Thursday, November 1, 2012

Obedience to Authority

Obedience to Authority is a case study in the psychological flaw of following authority figures regardless of subject, authority figure, victim, or circumstance.

Tuesday, October 16, 2012

Ethnography Ideas

One of the ideas I was considering for my ethnography is studying one or more student-athlete circles. I would study student athletes on the football, basketball, baseball, or other teams, their relationships with one another and with the athletic staff, and how they juggle their academic and athletic endeavors. One thing I believe could be beneficial in this ethnography is studying several different teams at Texas A&M, although this wouldn't necessarily be feasible given the short two-week span we'll have for the assignment. It could be interesting to spot differences or patterns of similarity in the way different teams behave.

Tuesday, October 9, 2012

Nonobvious Observations


The articles that we read on non-obvious observation yield a more nuanced view of the initial part of the ethnography project that we have begun. Essentially, to truly differentiate between people walking along different routes, we must look into the more subtle details. We will not have access to audio, or to any kind of visual that would make it obvious who the person being recorded is, so the challenge lies in how we can make observations about people without any of the clues that we'd naturally look for.

Wednesday, October 3, 2012

Thoughts on Ethnography Articles


At a glance it might have seemed as if the field of ethnography were merely an observational practice, one in which only a dry set of facts was to be presented. It initially appeared to me that the study of different cultures and the intangibility of particular customs and beliefs weren't going to provide much in the way of controversy or discussion, and that belief held until I read the article on "Coming of Age in Samoa".

Tuesday, October 2, 2012

Comparison between "Emotional Design" and "Design of Everyday Things"

While reading "Emotional Design", it's hard to remember that it was written by Norman, the same author as "Design of Everyday Things". Where Everyday was scientific, Emotional is subjective. Where Everyday appealed to the strict functionality of devices, Emotional throws it into a "gut-feel" blender to produce what simply looks nicest. In many ways the ideas in one book even appear to contradict the other.

Wednesday, September 19, 2012

Design of Everyday Things: Reactions of book and Chapters

I think it's very easy to see why The Design of Everyday Things is extremely famous and influential. Despite being written in 1988, it manages to capture and articulate many abstract processes behind what makes good and bad design, to the point where it pioneered this kind of open discussion of the subject. Considering that poorly designed products are still rampant in today's market, ranging from obscure devices to extremely well-known hardware and software, it's easy to see how the book has managed to stay relevant and will continue to be so for quite some time.

Wednesday, September 12, 2012

Article Reaction: Chinese Room

I believe that the paper presented by John Searle makes a very compelling argument that the chief difference between doing and understanding is not derived from merely physically correct outputs. I completely agree that no amount of window dressing, whether a lifelike moving robot, a command of Chinese so fluent that a native speaker believes the machine actually knows the language, or any change to the Chinese Room to make it more "brain-like", will change the fact that simulated understanding is still simulated and not real. What I'm personally interested in knowing is whether any kind of man-made AI is even possible under the argument he presented. He believes that none of the accepted notions of strong artificial intelligence at the time were actually strong, but he doesn't seem to present any metrics by which he would consider an implementation truly "understanding". He claims that the processes of understanding, will, beliefs, feelings, and so on are all inherent to the brain in incredibly complex, enigmatic ways that humans have not been able to understand. Does this mean that, to him, nothing will ever achieve it? Or does he believe that in some distant future people will eventually be able to divorce the true essence of mental understanding from the physical brain and create man-made intelligence that truly understands rather than merely simulates?

I personally believe so. I think that at some point, whether 5 years from now or 5,000, humans will be able to extract this "essence of the human mind" from the mind itself and artificially recreate it. If that is the case, I don't believe Searle's argument would have any grounds to declare the result a weak AI.

At the same time, however, I don't believe this would be of trivial difficulty. That much is obvious, of course, given that despite all the current psychological and physiological studies of the human brain, the species is nowhere near understanding the fundamentals of human rationality to the point where we can recreate them. But I believe mankind will never be able to understand these fundamentals with one-hundred-percent accuracy: first, because there will be no way of immediately knowing whether the models presented are perfect, and second, because the concept is so complex that it will be easy to find flaws. We will most likely end up with models much like our models of light, where in some cases we treat light as waves and in others as particles, because neither model alone is inclusive enough to describe all the phenomena we have observed, yet excluding either one ignores important behaviors that cannot be passed up. I wonder if Searle would make significant objections should mankind ever create a relatively convincing model of understanding based on two simultaneous systems. Would an artificial intelligence based on two models at once, which together constitute true understanding, be considered a "simulation" by his logic? If so, would models like the light model mentioned above be considered by Searle a "simulation" of the understanding of light and not a real understanding?

Monday, September 10, 2012

Paper Reading #6: Playable Character: Extending Digital Games into the Real World

Playable Character: Extending Digital Games into the Real World is a paper presented at CHI 2012 regarding the possibility of implementing real-world elements in gaming and how these can affect player interest. The authors of this paper were:


  • Jason Linder: Associated with the California College of the Arts, San Francisco, California, United States
  • Wendy Wu: Also associated with the same college as Dr. Linder. Has collaborated with several other researchers on a variety of other topics, including robotics, computer visualization, and animation.

Summary

The interest of this paper lies in the concept of alternate reality gaming, in which elements from the real world are pulled into a game and vice versa. The main reasoning for exploring this aspect of gaming is the use of interactive software as a form of entertainment that means something more than inputs versus feedback. Some of the examples cited in the paper were the use of alternate reality gaming in education, military training, and political/social expression. This paper sought to investigate, through player feedback, how these elements of alternate reality gaming can improve the player experience.

The main experiment was divided into five different parts, each with a different game prototype (the first four called "probes"), with the last being a full-fledged title developed with the earlier results in mind. The first probe was a city-simulation concept that tasked players with adding buildings by taking pictures of existing ones around their real lives and evaluating how these buildings would work in their imagined cities. The second involved having a game character improve via the performance of real-life activities, and evaluating whether those activities were in any way made more enjoyable by involving the game. The third was set in an office with cubicles: players were tasked with moving from one end of a hallway to the other, with the top people recorded in three different categories: fastest trip, most points, and most trips. The fourth revolved around real-life office cubicles becoming "territory" that one could win and sell off for virtual profit via the completion of a variety of tasks, from answering personal questions to performing helpful activities around the office to doing silly things. Player interaction and reactions were recorded and evaluated.
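To make the third probe's scoring concrete, its three leaderboard categories could be modeled with a structure like the following. This is purely a hypothetical sketch of mine; the paper does not describe how its tracking was actually implemented, and all names here are made up.

```python
from dataclasses import dataclass, field

@dataclass
class Trip:
    player: str
    seconds: float   # time taken to cross the hallway
    points: int      # points earned on this trip

@dataclass
class Leaderboard:
    """Tracks the probe's three categories: fastest trip, most points, most trips."""
    trips: list = field(default_factory=list)

    def record(self, trip: Trip) -> None:
        self.trips.append(trip)

    def fastest(self) -> str:
        # Single best hallway crossing, regardless of how often a player ran
        return min(self.trips, key=lambda t: t.seconds).player

    def most_points(self) -> str:
        totals: dict[str, int] = {}
        for t in self.trips:
            totals[t.player] = totals.get(t.player, 0) + t.points
        return max(totals, key=totals.get)

    def most_trips(self) -> str:
        counts: dict[str, int] = {}
        for t in self.trips:
            counts[t.player] = counts.get(t.player, 0) + 1
        return max(counts, key=counts.get)
```

Under this model, a player who holds an unbeatable "fastest" record can still be overtaken on the other two boards simply by making more trips, which matches the behavior the authors observed.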



The final game in the study was a much more elaborate title named "Forest", developed in collaboration with the San Francisco non-profit organization Friends of the Urban Forest. Players were asked to become involved with Friends of the Urban Forest, with the game app serving as a complement to the experience. People earned virtual "leaves" (the game currency) through a variety of activities, including documenting existing trees around the San Francisco area by taking pictures of them and identifying their species, volunteering in efforts to plant new trees, and citing potential locations for planting new trees, among other activities. A secondary part of the game that was conceptualized but not actually implemented would have let players spend their earned leaves to place the trees they "collected" (or rather, documented in real life) into a virtual forest that the player would be required to keep pruned and maintained. The concept was proposed to potential users, who were able to try out the completed app and give feedback.

Related Work

Concepts of alternate reality gaming have already been researched thoroughly across a variety of topics. The ten most relevant papers in this field include:

  1. Evaluating enjoyment within alternate reality games
  2. Participation, collaboration and spectatorship in an alternate reality game
  3. Alternate reality games: a realistic approach to gaming on campus?
  4. WeQuest: scalable alternate reality games through end-user content authoring
  5. An enjoyment metric for the evaluation of alternate reality games
  6. Designing alternate reality games
  7. Designing the future of collaborative workplace systems: lessons learned from a comparison with alternate reality games
  8. The ABC's of ARGs: Alternate Reality Games for Learning
  9. Alternate reality games and groupwork
  10. Game design for promoting counterfactual thinking
The work in the papers presented above shows a considerable interest in observing the concepts behind, and reactions to, alternate reality games. Several of the papers describe applications the researchers believed would be useful in their respective fields, such as education, authoring, and collaboration. The related work highlights that while this blending of alternate reality gaming and gamification is not entirely new, it is still in its infancy and its effects continue to be explored.

Evaluation

The evaluation of these experimental games was entirely subjective. There wasn't any attempt to present findings in an objective or empirical way, and no graphs or interpretations were provided to summarize whatever data was gathered. The authors seemed most focused on how the people playing these games reacted and how receptive they were to the ideas presented to them.

The first game, the city-simulation title, saw people interested in taking photos of buildings that they personally enjoyed, even though there was no feedback as to how the collected buildings would come together to create a cohesive city. The authors attributed this seeming lack of focus to the fact that the proposed game had no gameplay of its own. They postulated that people would have a more focused experience should a game be built around this concept, but concluded that people were definitely interested in creating a city populated by buildings they had collected information about.

The second game, the adventure title with upgrades directly linked to real-world activities, had a more mixed reception, with feedback varying by the activities the players chose. Some players showed significant interest in performing everyday activities while pretending in the back of their minds that they were the character: boarding the subway as part of a "stealth" mission, for instance, would make players feel playfully sneaky even if their outward behavior didn't change. By contrast, players who chose tasks like reading newspaper articles found the link between the game and real life considerably more detached, lessening their enjoyment.

The third game, the office game built around traversing a hallway, proved popular, with players initially focused on the "fastest trip" metric. Eventually people would sprint down the hallway, and others in the office would join in. Once it became clear that certain people held unbeatable records, players moved on to the other two categories (most points and most trips) to populate those leaderboards.

The fourth game, the second office game built around territorial control of cubicles, was found very intriguing by many players and was more popular than the third. People would often discuss strategies among themselves and would rarely outright lie to the computer to get more points, although some did skirt the rules (questions involving the posting of passwords, for instance, would have players post their password only to change it seconds later).

The Forest game developed with Friends of the Urban Forest proved popular both among people interested in the local flora and among those already involved with the organization. Overall, the researchers found a generally positive reception to all four probes as well as the Forest game, with some more successful than others, and the results heightened their interest in pursuing this concept further.

Discussion

I personally think that alternate reality gaming is one of the most entertaining types of gaming and the one with the most potential. Marrying the virtual and the real can be used to strong effect in education, for instance, as many students feel a disconnect between schoolwork and their life outside of it. Alternate reality gaming can help change this perception and educate children through much more engaging means.

The concept of the Forest game I also found very intriguing, in that an app mixed with its own FarmVille-style game has great potential for helping people become involved with local charities and non-profits. I believe this idea should be further explored by anyone who seeks to increase interest and participation among a particular population.

Wednesday, September 5, 2012

Paper Reading #5: Understanding User Experience in Stereoscopic 3D Games

Understanding User Experience in Stereoscopic 3D Games is a paper presented at CHI 2012 that aims to explore how users really react to a game played in stereoscopic 3D, via psychophysiological measurements and subjective feedback metrics.

The authors of the work are the following:


  • Jonas Schild: Researcher in the Entertainment Computing Group at the University of Duisburg-Essen in Germany. Has conducted many previous studies gauging player experience in gaming.
  • Joseph J. LaViola Jr.: An Associate Professor of EECS at the University of Central Florida. Has made several studies regarding human interaction with different interfaces aside from gaming, as well as several studies regarding computer graphics.
  • Maic Masuch: Like Jonas Schild, Masuch is associated with the Entertainment Computing Group at the University of Duisburg-Essen in Germany. His studies focus on perspectives in gaming as well as computer graphics.

Summary

This study focused on gauging player experience with stereoscopic 3D. Users were provided with stereoscopic 3D shutter glasses and played PC titles on a standard high-definition television.



In terms of the experiment itself, the procedure and data gathering were relatively straightforward, with most of the paper's focus on the interpretation of that data. Users across demographics, both experienced and inexperienced in gaming, were asked to describe their in-game experience in both 3D and non-3D through professional metrics such as the Game Experience Questionnaire, which measures Immersion, Flow, and Competence among other components. Psychophysiological data was measured using a headset called the NeuroSky MindSet, which gauges levels of attention and stress. The acquired data was then heavily analyzed using a variety of mathematical models and organized to identify any possible patterns.
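Questionnaires of this kind are typically scored by averaging groups of Likert-scale items into per-component scores. The sketch below illustrates that general idea only; the item-to-component mapping is invented for illustration and is not the actual Game Experience Questionnaire scoring key.

```python
# Hypothetical illustration of Likert-scale component scoring, in the style of
# the Game Experience Questionnaire. The item numbers assigned to each
# component below are made up; the real GEQ defines its own key.
COMPONENTS = {
    "Immersion":  [1, 4, 7],
    "Flow":       [2, 5, 8],
    "Competence": [3, 6, 9],
}

def component_scores(responses: dict[int, int]) -> dict[str, float]:
    """responses maps item number -> rating on a 0-4 Likert scale.

    Each component score is the mean of its items' ratings.
    """
    return {
        name: sum(responses[i] for i in items) / len(items)
        for name, items in COMPONENTS.items()
    }
```

Averaging per component is what lets the researchers compare, say, mean Immersion in the 3D condition against the non-3D condition for the same player.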

Related Work

Related papers in the field of player experience from 3D games were found to be the following:

  1. Designing for user experience: what to expect from mobile 3d tv and video?
  2. Navigation in 3D virtual environments: Effects of user experience and location-pointing navigation aids
  3. Bringing VR and Spatial 3D Interaction to the Masses through Video Games
  4. Simulator sickness in virtual display gaming: a comparison of stereoscopic and non-stereoscopic situations
  5. Evaluating the Usability of an Auto-stereoscopic Display
  6. Virtual Reality: How Much Immersion Is Enough?
  7. Measuring Experiences in Gaming and TV Applications Investigating the Added Value of a Multi-View Auto-Stereoscopic 3D Display
  8. Visual Discomfort in Stereoscopic Displays: A Review
  9. Stereoscopic 3D film and animation: getting it right
  10. A study of visual fatigue and visual comfort for 3D HDTV/HDTV images
Some of the related papers focused on presenting technologies of their own rather than simply gauging the user experience of stereoscopic 3D; however, they also conducted studies that pertain to this paper's subject. Overall, it is clear that this technology is being analyzed from many angles as stereoscopic 3D sees heightened inclusion in different media, particularly video and gaming.

Evaluation

The data acquired from the experiment came from two different sources: the questionnaire provided to the testers and the NeuroSky MindSet. The information acquired from one of the questionnaires is as follows:


From this data and its interpretation, the researchers concluded that their findings pointed toward a higher level of motion sickness as well as increased spatial presence within the game. They had previously hypothesized that the particular game being played would have no significant effect, but found that the choice of game did significantly alter the results.

The MindSet data was shown to be the following:


According to the researchers' findings, the data pointed to a lower mean Attention during gameplay, though it is not entirely clear how this affected their conclusions, since they mention there was no correlation between Attention and any other metric in their experiment.

Discussion

Generally, I found this study interesting given that stereoscopic 3D is poised to have a pervasive effect on home video over the next few years. Studies like these are of great importance in gauging how well people react to these radically new methods of video output, especially in gaming, where concentration and interpretation of visual feedback are vital to a game's enjoyment. A large portion of the paper was devoted to the mathematical methods used to interpret the data, which I found not to make a large difference in informing the technology enthusiast, but other scientists could very well find it useful for future work in this field.

Paper Reading #4: Not Doing But Thinking: The Role of Challenge in the Gaming Experience

Not Doing But Thinking: The Role of Challenge in the Gaming Experience is a research paper involving higher-level gaming concepts, presented at CHI 2012. The publication was written by:

  • Anna L Cox: Senior lecturer in Computer Human Interaction and researcher at the University College London. Has written a large number of papers pertaining to high-level concepts in video games, with an emphasis on immersion.
  • Paul Cairns: Senior lecturer and researcher associated with the University of York. His past work focuses between gaming concepts and a wide variety of different topics concerning computer science.
  • Pari Shah: Associated with the department of Psychology & Language Sciences in the University College London. Research mostly pertains to psychology-related material.
  • Michael Carroll: Researcher in Department of Computer Science of the University of York.

Summary

This paper attempts to explore the role that challenge plays in overall video game immersion, using three different experiments to qualitatively gauge player immersion, with the independent variable being different tweaks to existing games to make them more challenging in different ways. The authors sought to answer whether more challenge means more immersion in games, and if so, in what way.



Using pools of 20 to 41 students, the first experiment consisted of a typical tower-defense game tweaked primarily toward "physical challenge": more clicks and more physical movement were required from the player than in the baseline version of the same game, and player immersion was measured against this change in design. The tweaks were balanced so that the more physically demanding version was not deliberately harder and did not break the game. The second experiment related to time pressure, using the popular game Bejeweled to alternate between a timed mode and a regular untimed mode; this gauged how immersed players felt under a more stringent playing constraint. The third experiment combined time pressure and expertise in the game Tetris to gauge how players were immersed under both pressures at once.

Related Works

The works that I was able to find that related to the topic of video game immersion on dependent factors were numerous and wide. The following were the ten most relevant papers on the subject:
  1. Effects of different scenarios of game difficulty on player immersion
  2. The Case for Dynamic Difficulty Adjustment in Games
  3. Video games: Perspective, point-of-view, and immersion
  4. Measuring and defining the experience of immersion in games
  5. Extending Reinforcement Learning to Provide Dynamic Game Balancing
  6. Designing Action Games for Appealing to Buyers
  7. AI for Dynamic Difficulty Adjustment in Games
  8. Difficulty Scaling of Game AI
  9. Dynamic Difficulty Controlling Game System
  10. Emotion Assessment From Physiological Signals for Adaptation of Game Difficulty
All ten of these papers devoted at least some significant portion of their studies to immersion, particularly to the factors that affect it and how it can be measured to achieve a desired result. In some of the papers, this was explored in order to construct a system or a particular game based on the factors that affect immersion, among many other aspects examined in each study.

This shows that many of the concepts this paper explores are not particularly novel. However, the study was able to make an empirical observation (albeit one resting on qualitative evidence) of which factors affected immersion, and attempted to gauge to what extent they did so. Since immersion is one of the most desired effects from a developer's perspective, I found the vast amount of interest in the subject unsurprising.

Evaluation

The evaluation of the first experiment was dependent on whether the immersion was heightened, lessened, or remained the same as the game became more physically demanding. The results can be most accurately summarized with the following two graphs:



As can be seen, actions and performance differed only marginally between the low-effort (less demanding) version and the higher-effort one, and most importantly, there was next to no difference in the players' perception of immersion. The second graph presents the result more visually. The researchers accordingly concluded that the added physical effort did not change immersion.

The second experiment of adding a time constraint to the game Bejeweled yielded the following results:

The results in the table showed that the players' total immersion was significantly increased by the addition of time pressure. In terms of scoring, the researchers found that scores under "No Time Pressure" and "Time Pressure" were relatively similar. This served as an example where time pressure did add to the immersion of the game, in contrast with the results of the first experiment, which added more physically demanding controls.

The third experiment consisted of adding both time pressure and a higher game challenge. The results were as follows:


The results only show the challenge-factor scores between lower-level and higher-level play. However, the researchers drew a link between the level of challenge and player immersion, concluding that difficulty likely affects immersion.

Discussion

I personally found this study interesting, if a little subjective in its overall conclusions. Of particular note is the gauging of "immersion" as an objective number. While it's possible to tell whether a player is immersed in a game or not, gauging the level of that immersion can only, at best, be qualitative. The study therefore reads more as a general psychological test than as an empirical extraction of data. Different users can gauge their own immersion in different ways, and the way each game was constructed, along with its own inherent "immersive" qualities, most likely contributes as well. The fact that an entirely different game was used for each experiment makes any pattern less clear.

Tuesday, September 4, 2012

Paper Reading #3: Using Rhythmic Patterns as an Input Method

Using Rhythmic Patterns as an Input Method is a paper about users' ability to produce rhythmic taps on their phones, and about how these taps can be used as an alternative input method to supplement traditional icon-touching controls.


Summary

Tests were created by writing custom software that allowed users to "learn" how to tap rhythmically using three methods: audio, visual, or both. The program would depict the expected input in the form of bars of two widths: a thin bar indicated a rest, while a thick bar indicated that the user needed to tap. The longer the thick bar, the longer the tap needed to be held.

Diagram of taps as presented in the test app
One facet the researchers considered important was that the average user with no musical background should be able to use this kind of tapping as a form of input. Because of this, many of the subjects they gathered had no previous musical training and no working concept of tempo, beats, or how taps relate to both. These were concepts the test app attempted to teach.

To help teach these concepts, three methods were employed. The first taught via sound cues alone; the second taught and prompted for input via visual cues, with the rhythm diagrams changing color from left to right in synchronization with the tempo; and the third combined both at once. The accuracy of taps correctly entered in time with the tempo was measured as quantitative data.
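To make the accuracy measurement concrete, here is a minimal sketch of how tap scoring might work. The paper does not give its scoring algorithm; the representation of a pattern as (start, duration) pairs and the 120 ms tolerance are my own assumptions for illustration.

```python
# Hypothetical tap-accuracy scorer; the tolerance value and the
# (start, duration) representation are assumptions, not from the paper.

def score_taps(expected, recorded, tol=0.12):
    """expected/recorded: lists of (start, duration) taps in seconds.
    A recorded tap matches an expected one if both its onset and its
    held length fall within the tolerance."""
    matched = 0
    used = set()  # each recorded tap may satisfy only one expected tap
    for e_start, e_dur in expected:
        for i, (r_start, r_dur) in enumerate(recorded):
            if i in used:
                continue
            if abs(r_start - e_start) <= tol and abs(r_dur - e_dur) <= tol:
                matched += 1
                used.add(i)
                break
    return matched / len(expected)

# Pattern: a short tap, a rest (thin bar), then a long held tap (thick bar)
pattern = [(0.0, 0.2), (0.6, 0.5)]
attempt = [(0.05, 0.25), (0.62, 0.48)]
print(score_taps(pattern, attempt))  # both taps within tolerance -> 1.0
```

A stricter scorer, like the one the paper describes, would simply shrink the tolerance, which is presumably part of why the reported success rate sat near 64%.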

Related Works

The related works found on using tapping as an input method in electronics were the following papers:

  1. TapSongs: Tapping Rhythm-Based Passwords on a Single Binary Sensor
  2. Tok!: A Collaborative Acoustic Instrument using Mobile Phones
  3. RhythmLink: Securely Pairing I/O-Constrained Devices by Tapping
  4. Polyrhythm Hero: A multimodal polyrhythm training game for mobile phones
  5. Evaluation of a rhythm based user authentication system for mobile devices
  6. Exploiting rhythmic sense in a ringtone composer
  7. Finger tapping measurement system design on mobile devices
  8. Tap input as an embedded interaction method for mobile devices
  9. Gestures all around us: user differences in social acceptability perceptions of gesture based interfaces
  10. Instrumented Usability Analysis for Mobile Devices 
The main piece of information I extracted from these related works is that rhythmic tapping as an input method has been explored and researched extensively and applied in a variety of contexts, some of which are already slated for real-world use soon. The cell phone company Research In Motion, for instance, has considered using rhythmic tapping as a form of password. Another paper presented tapping as a useful tool for learning rhythms through phone apps, and another showed that users can compose unique custom ringtones by tapping, letting them recognize when it is their own phone ringing. In short, this technology has advanced far enough to reach the mainstream consumer space.

Evaluation

Quantitatively, the overall success rate was around 64.3%, and the paper emphasizes that the program used for testing was especially strict about what it counted as a tap in the expected tempo. The researchers noted that the lengths of the taps and beats themselves posed an inherent difficulty for the user, with particular combinations leaving users more prone to error than others.

Graph detailing success rate depending on different tap and beat tempos
In addition, the test measured the success rate under each feedback condition: visual, audio, both, or neither. The researchers qualitatively concluded that having users reproduce the taps entirely on their own, with no additional help from the test app, produced the worst success rates, while the Audio and AudioVisual conditions tended to be the most successful.

Qualitatively, the researchers asked the subjects their opinions on certain aspects of the test. Notably, subjects reported that the AudioVisual condition tended to be the most confusing, as they found it overwhelming to focus on both channels at once, and they tended to prefer audio feedback alone.

From both types of data, the researchers concluded that there is significant potential for rhythmic tapping even among people who are not musically trained. They suggest this opens up possibilities such as teaching people Morse code so they can communicate by tapping, or assigning particular rhythms to commands so the phone can be controlled without looking at it.

Discussion

Overall, I found this paper interesting because it made me aware of a facet of touch-based input that I had not previously considered. I think something like this could help curb dangerous situations such as texting while driving, if users were given the option to communicate by tapping without ever having to look down at the phone.

Admittedly, making software read rhythmic taps isn't particularly complicated; what this paper explores is whether an app can teach new users to tap reliably. With a 64% success rate on exact matches of tempo and taps, the method presented shows promise for introducing a new dimension of phone input.

Monday, September 3, 2012

Paper Reading #2: Pet Video Chat: Monitoring and Interacting with Dogs over Distance

Pet Video Chat: Monitoring and Interacting with Dogs over Distance is a research paper presented at CHI 2012 that deals with the maintaining of interactions between pets and their owners remotely. The researchers involved in this work are:


  • Jennifer Golbeck: Associated with the University of Maryland in the Human-Computer Interaction research laboratory in College Park, Maryland.
  • Carma Neustaedter: Associated with the Simon Fraser University in the School of Interactive Arts + Technology in Vancouver, Canada

Summary

The monitoring system the researchers used was a rudimentary setup combining a previously written program with Skype. The program allowed the remote user to attract the attention of pets through three different applications.

Pictured: Poochy-Vision
The first is the Sound Panel interface, in which the user can choose from twelve different sounds such as rubber duckies, whistling, dog chew toys, meowing cats, and howling dogs. The second is a "virtual laser pointer" interface in which the owner remotely manipulates a bright red dot on a black background. The third is a "Tadpole interface," showing an animation of a tadpole swimming back and forth independently of any user input. Audio was also transmitted for voice communication between the owner and the pet, although video was not used.

The owners were tasked with setting up two computers, one in an area that's easily visible and frequently visited by the pet, and the other in a different room in the house for the owners' use. They were also tasked with some preliminary conditioning of the dogs to determine which of the sounds the dogs responded the best to as well as trying to get them to pay attention to the laser pointer and the tadpole. They were then instructed to leave the room and begin remote communication.

Related Works

Considering the rather odd nature of this particular paper, it was challenging to find at least ten different prior works dealing with the technology presented. Nonetheless, the relevant papers that were found are as follows:

  1. Pet Internet and Huggy Pajama: A Comparative Analysis of Design Issues 
  2. Computer Mediated Remote Touch Communication for Humans and Animals
  3. PlayPals: Tangible Interfaces for Remote Communication and Play
The first paper in particular concerns itself not only with remote communication, but also with establishing physical contact remotely between an owner and his or her pet: the pet wears a lightweight jacket "embedded with vibrotactile actuators" that contracts based on how and where the owner touches his or her device remotely. The second paper surveys existing technology for remote communication between animals and humans, indicating that at the very least this kind of research remains ongoing despite being in its infancy. The third deals with constructing interfaces (mainly remotely manipulated figurines) for communication between children, which shows that remote interaction in this fashion has been attempted before.

Evaluation

Since the test subjects of importance are dogs, the researchers understandably had difficulty presenting their results in purely quantitative terms. They did, however, manage to create a table of "subject ratings" showing levels of pet engagement with the remote visual and audio stimuli.


As can be seen from the table, the researchers combined these ratings with qualitative observations of pet reactions to conclude that the laser and the tadpole tests were considerably less engaging to a pet being stimulated remotely. The panel sounds and the audio of the owner's voice, by contrast, remained the two most reliable ways to engage the pet.

Overall, the researchers concluded that the experiment succeeded in stimulating the pet from a remote location, encouraging the pet to look toward the laptop's camera and letting the owner see the pet remotely. This created a rudimentary, albeit successful, line of communication between pet and owner. Although not every test proved successful, the animals' reactions led the researchers to believe that reliable remote communication between owner and pet is possible, making this a promising field of research.

Discussion

This paper in particular has intrigued me ever since I read the list of papers on the CHI website. The idea that someone could communicate in some way with his or her pet remotely caught my attention, and I was curious how they created the technology to achieve it. The conclusion that audio was more engaging to dogs than video makes sense upon reflection: a dog is instinctively much more likely to respond to audio cues than to a screen (small, relative to a dog's everyday field of vision) showing rudimentary representations of a tadpole and a red laser. I expected research in this particular area to be sparse, though I was still surprised at just how little relevant work has been done in this field. The concept may sound silly at first, but further research into the technology, and into how pets psychologically react to remote cues, could yield very interesting results.

Thursday, August 30, 2012

Paper Reading #1: See me, see you: a lightweight method for discriminating user touches on tabletop displays


See Me, See You was a concept paper presented at this year's CHI conference. Its main focus was on overcoming the limitations and pitfalls of incorporating multiple users into large touch-based surfaces. The authors were:


  • Hong Zhang - worked on a paper that focused on evaluating and introducing new one-handed gestures for touch-based phones. It was written in collaboration with Pourang Irani (also an author of this paper) in addition to other authors. Affiliates with the University of Manitoba.
  • Xing-Dong Yang - Has a large history of 13 other papers published that cover a wide variety of different topics and fields of research, though many of them focus on touch-based inputs and improving or including additional functionality to user interaction with touch-based features. Is affiliated with the University of Alberta.
  • Barrett Ens - Has written one other paper based on the concept of "off-screen" pointing, essentially having a touch device tracking finger "pointing" outside of the normal viewport of the device. Like Hong Zhang, Barrett Ens affiliates with the University of Manitoba.
  • Hai-Ning Liang - Has published only this paper and is affiliated with the University of Manitoba as well.
  • Pierre Boulanger - Highly prolific researcher who has collaborated on 57 other papers spanning highly varied topics. Affiliates with the University of Alberta.
  • Pourang Irani - Collaborated with Hong Zhang on the one-handed-gestures paper mentioned above and, like most of the other authors, affiliates with the University of Manitoba.



Summary

On a general level, the project recognizes and attempts to correct the problems surrounding incorporating multiple users into a single, large device. One of the largest issues the authors considered was that many solutions for adding multi-user functionality to large touch-based screens involve dedicated additional peripherals, which add to the cost of the devices and decrease user enjoyment by burdening users with peripherals that have their own "rules" for usage. The basic concept of See Me, See You is to implement this functionality with as little extra effort as possible, on both the developer/manufacturer side and the user side.
The picture on the left depicts the end-user functionality with this system in place. As can be seen, each user has his or her own "brush," employed individually on the picture itself, and none of the individual fingers are "confused" with one another. The paper repeatedly stated that this solution remains accurate at both the large and the small distances that may separate users.

The system works as follows: the implementation employs a "learning system" in which the hardware "learns" the users and adapts to them with minimal direct user input. The user places a finger on the surface, and the program uses cameras and sensors to build a "profile" of each individual hand. The program distinguishes between different hands but, most importantly, between different finger orientations.


This solution in particular takes into account the angle at which the finger is placed relative to the location of the hand's palm, making it easy for the program to learn where each user is positioned in relation to the others. Essentially, it determines whether a hand with an index finger stretched out to touch the surface is oriented from the front, from the side, or from across other users. From this, a user can touch any other place on the screen without worrying about "overlapping" with other users, a phenomenon in which the surface mistakenly believes that two fingers relatively close to each other actually belong to the same user. The system can thus accurately recognize where each user is interacting with the screen.


The system applies the scanned hand and finger data to a predetermined "prediction" chart that identifies what a hand should look like for any user at any point on the surface, and can then distinguish which person is touching a particular area. The surface is divided into smaller "cells" to make this distinction easier by the system.
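The cell-based lookup described above can be sketched roughly as follows. This is only my illustration of the idea: the cell coordinates, the angle values in the chart, and the user names are invented, and the real system learns its prediction chart from camera and sensor data rather than using a hand-written table.

```python
# Hypothetical sketch of the cell-based orientation lookup; the chart
# values and user positions are invented for illustration only.
import math

# Predicted finger orientation (degrees) for each user at each cell.
# In the real system this "prediction" chart is learned, not hard-coded.
predicted = {
    (0, 0): {"left_user": 80, "right_user": 100, "side_user": 10},
    (0, 1): {"left_user": 85, "right_user": 95, "side_user": 5},
}

def angle_diff(a, b):
    """Smallest difference between two angles in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def identify_user(cell, touch_angle):
    """Attribute a touch to whichever user's predicted finger
    orientation at this cell is closest to the observed angle."""
    chart = predicted[cell]
    return min(chart, key=lambda user: angle_diff(touch_angle, chart[user]))

print(identify_user((0, 0), 78))  # closest to left_user's predicted 80
```

Because each cell has its own predicted orientations, two fingers landing close together can still be attributed to different users, which is exactly the "overlapping" problem the paper sets out to avoid.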



Related Works


Papers that I personally found to be of relevance to the material are:
  1. Touch me once and I know it’s you! Implicit Authentication based on Touch Screen Patterns
  2. Interactive Gesture-based Authentication for Tabletop Devices
  3. Multi-Touch Authentication on Tabletops
  4. The IR Ring: Authenticating Users’ Touches on a Multi-Touch Display
  5. Biometric-Rich Gestures: A Novel Approach to Authentication on Multi-touch Devices
  6. Spatial Authentication on Large Interactive Multi-Touch Surfaces
  7. Authenticated Tangible Interaction using RFID and Depth-Sensing Cameras
  8. Performance Enhancement of Large-Size NFC Multi-Touch System
  9. Using Mobile Phones to Spontaneously Authenticate and Interact with Multi-Touch Surfaces
  10. pPen: Enabling Authenticated Pen And Touch Interaction on Tabletop Surfaces

These papers were selected for a variety of reasons, though they all share some kind of authentication functionality on multi-touch devices. Some of them do not specifically use tabletop surfaces to authenticate between users, and many authenticate only one user at a time (such as a phone "knowing" who is using it based on their behavior). However, the crux of this project's research, as with the related papers, is to meet the challenge of having the device recognize the user simply by way of touch. Because this is such a complex problem, it invites many different technologies and kinds of hardware.

The most obvious thing to gather from these sources is that this particular paper does not attempt to introduce a highly innovative or never-before-seen feature on touch devices. Instead, it aims to perfect one of the more complicated caveats of having multiple people use multitouch devices at once. Multiple examples were shown in the paper itself, and a very large number of related papers were found online independently. Many of those solutions involved cumbersome methods that lessened user experience and flexibility: some required static positioning of users with no chance of moving beyond a set range, some required that sensors be attached to users for proper identification, and some attempted non-peripheral solutions but were found to be highly inaccurate.

Another thing to note is that this particular usage of the technology does not employ entirely original algorithms. Some of the algorithms used in the recognition of different users were already invented by other researchers, and these were tweaked for use in this solution.

Evaluation

In order to evaluate the performance of the device, the researchers conducted several different tests, many under the condition that two users stand side by side and a third stand at the adjacent side of the touch surface. Because there were some restrictions on how users were to manipulate the screen (several common touchscreen gestures were tweaked to make them easier to manipulate), a small amount of user "training" was needed, and the researchers evaluated how users did.

One of the more quantitative pieces of data concerned how each user's index-finger position and orientation changed with the position of the "cell," each cell being defined as 9.1×6.2 cm on the tabletop surface. The program would highlight a cell and each of the three people (two standing side by side at the table, one standing at the side perpendicular to the first two) would touch it, with the researchers' test program recording and mapping each finger position and orientation. The results were as follows:

Note how each color corresponds to a finger, and how the fingers deviate slightly when going from one end of the cell mapping to the other. Most importantly, the researchers used this mapping as quantitative evidence that the technology can clearly distinguish each user's finger from any cell on the table, and extrapolated that its accuracy extends to every point on the surface. In practice, they were able to achieve accuracy as high as 98% in simple situations and 92% in more challenging scenarios.

Because this project inherently involves user satisfaction, many of the evaluations were understandably qualitative. Testers performed a small amount of "training" to learn the new conditions and the slightly tweaked gestures for common multitouch operations, and were later asked about their experience with the technology and about any potential discomfort with the tweaked input methods. The researchers concluded that the negative impacts of the technology were minimal: users needed only minutes to get accustomed to the change, and the technology worked well for them the vast majority of the time without their having to actively guide it toward the desired result.

However, for a lot of the other data gathered, chiefly the accuracy percentages of how often touch inputs were correctly attributed to their owner, the studies remained quantitative.

Discussion

I personally found this paper's technique for making a large touch surface available to multiple simultaneous users intriguing. This particular implementation seems highly versatile, since it requires fewer peripherals and fewer deviations from common input conventions, making things easier for hardware manufacturers, users, and programmers alike. As large touchscreen surfaces become commonplace, the need for these devices to differentiate between multiple people will inherently present itself. The less the input and user experience change as a result of implementing this functionality, the better, and the paper makes a persuasive case that this is among the most ideal solutions to the challenge.

Wednesday, August 29, 2012

Blog Entry #0

Dear lord! Who is that handsome devil who looks like he just stepped out of a beauty magazine?
Pictured: ME!
E-mail Address: ranierolg89@neo.tamu.edu
Class: 2nd year Senior

Why are you taking this class? I've always been interested in the fact that studying computing itself is only half the story. The way humans interact with it, the effects of technology on humans, and what we find out about ourselves from using said technology is another aspect that I'd love to know more about.

What experience do you bring to this class? Much like probably everyone else in the class, I started learning programming independently from books back in grade school, before my school offered classes in it. I completed an internship two years ago at Cisco that gave me a lot of insight into how programming is done in a professional setting.

What are your professional life goals? I want to develop technology for the end consumer, because I want to see it directly impact people's lives.

What are your personal life goals? I don't have many, other than moving a lot to many different places. I'd like a job that lets me do that.

What do you want to do after you graduate? Hopefully get into a good graduate program either here or somewhere else.

What do you expect to be doing in 10 years? I don't know, nor do I want to. I know "not planning ahead" is frowned upon, but I find that planning what you'll be doing even a decade from now is boring. Not knowing what's in store for you in the next decade keeps things spicy.

What do you think will be the next biggest technological advancement in computer science? I think medical prosthetics are poised to make a huge splash in technology in the next few years. Artificial limbs/organs still have a long way to go but I think the technology is finally starting to catch up with the demand.

If you could travel back in time, who would you like to meet and why? I know it's cheesy, but one of my grandparents passed away before I was born, so I'd like to go back in time and meet her when she was in good health.

Describe your favorite shoes and why they are your favorite? Sneakers. They're comfortable, airy, you can run in them just as well as you can walk in them.

If you could be fluent in any foreign language that you're not already fluent in, which one would it be and why? Probably an Asian language of some sort. Something about learning an entirely different alphabet sounds appealing.

Interesting Fact: I've said this in other classes, but I'll just repeat myself here. You will most likely never meet another person with my same first name in your lifetime ;)