DOI 10.1007/s11257-005-1269-8
User Modeling and User-Adapted Interaction (2005) 15:235–273
© Springer 2005

Exploring Issues of User Model Transparency and Proactive Behaviour in an Office Environment Control System

KEITH CHEVERST, HEE EON BYUN, DAN FITTON, CORINA SAS, CHRIS KRAY and NICOLAS VILLAR
Department of Computing, Lancaster University, Lancaster, LA1 4YR, UK.
e-mail: [email protected]

(Received: 4 April 2004; accepted in revised form: 5 June 2005)

Abstract. It is important that systems that exhibit proactive behaviour do so in a way that does not surprise or frustrate the user. Consequently, it is desirable for such systems to be both personalised and designed in such a way as to enable the user to scrutinise her user model (part of which should hold the rules describing the behaviour of the system). This article describes on-going work to investigate the design of a prototype system that can learn a given user’s behaviour in an office environment in order to use the inferred rules to populate a user model and support appropriate proactive behaviour (e.g. turning on the user’s fan under appropriate conditions). We explore the tension between user control and proactive services and consider issues related to the design of appropriate transparency with a view to supporting user comprehensibility of system behaviour. To this end, our system enables the user to scrutinise and possibly over-ride the ‘IF-THEN’ rules held in her user model. The system infers these rules from the context history (effectively a data set generated using a variety of sensors) associated with the user by using a fuzzy-decision-tree-based algorithm that can provide a confidence level for each rule in the user model. The evolution of the system has been guided by feedback from a number of real-life users in a university department. A questionnaire study has yielded supplementary results concerning the extent to which the approach taken meets users’ expectations and requirements.

Key words. context history, intelligent environment, inference, machine learning, proactive behaviour, prototype deployment, scrutability

1. Introduction and Motivation

Over the past decade, one of the predominant trends in computing has been ubiquitous computing, the term having first been proposed by Weiser in the early 1990s. His vision included the notion of increasing the productivity or welfare of a user situated in a computer-everywhere environment by supporting human assistance in an intimate way (Weiser, 1991). One research domain that requires the computer-everywhere model of ubiquitous computing is that of the intelligent environment (Coen, 1998). In this domain, a wide range of physical devices (e.g., lights, audio/video equipment, heaters, air conditioning equipment, windows, curtains, etc.) can be controlled automatically on the basis of the context of a house or office and the preferences of the inhabitants (e.g., preferred temperature, preferred light level, energy consumption policies, etc.). One promising technique for achieving such intelligent environments is context-aware computing. Context-aware systems have been defined by Dey and Abowd (2000) as:

“systems [that] adapt according to the location of the user, the collection of nearby people, hosts, and accessible devices, as well as to changes to such things over time”.

However, as is described by Cheverst et al. (2001), when designing context-aware systems designers need to be conscious of a number of potential pitfalls in order to ensure that the context-aware system maintains appropriate levels of predictability and provides sufficient transparency to enable users to trust the behaviour of the system, to understand what adaptation strategies are in place, and to over-ride these adaptation strategies in those circumstances where such user control is warranted. As is noted by Jameson et al. (2004), the term transparency is an overloaded term in the literature on human-computer interaction. For the remainder of this article, we will use the more specific term comprehensibility to suggest that the user:

“. . . can look through the outer covering (e.g. a glass box) to examine the inner workings of the device.”

Concerning this need for comprehensibility, Abowd and Mynatt (2000) make similar comments when discussing key challenges in the ubicomp domain:

“One fear of users is the lack of knowledge of what some computing system is doing, or that something is being done ‘behind their backs’”.

Unfortunately, one implication of signalling to the user that the system is ‘doing something behind their back’ is the potential for inappropriate interruption of the user’s current task or activity. The negative impact of interruptions on task performance, particularly due to the shift of attention and memory load, has been widely documented in various studies. Despite the impact of interruptions on task performance, it has been hypothesised that a well-designed user interface may compensate for such negative effects and exploit the potential of multitasking which, with appropriate support, people are capable of. For an excellent review of this work see McFarlane and Latorella (2002).

Scrutability refers to the ability of a user to interrogate her user model in order to understand the system’s behaviour. In relation to ubicomp environments, Kay et al. (2003) describe how:

“. . . when the user wants to know why systems are performing as they are or what the user model believes about them, they should be able to scrutinise the model and the associated personalisation processes.”

Relating scrutability to the issue of control, Kay et al. (2003) also state that:


“. . . one of the important requirements of ubiquitous computing, [is] that of ensuring user control over the model”

and

“We see scrutability as a foundation for user control over personalization”.

In this article, we describe our formative work on exploring the comprehensibility, scrutability and control issues associated with the on-going development and evaluation of an Intelligent Office System, hereafter referred to as IOS. The system can learn a given user’s behaviour in an office environment in order to use inferred rules to populate his or her user model and support appropriate proactive behaviour, such as turning on the user’s office fan under appropriate contextual conditions (Byun and Cheverst, 2004). When we use the term proactive, we agree with the understanding of the term presented by Salovaara and Oulasvirta (2004) whereby:

“Proactive systems adhere to two premises: 1) working on behalf of, or pro, the user, and 2) acting on their own initiative.”

Our investigation is interested in uncovering and exploring:

• The possible techniques and surrounding issues for making the behaviour of the system comprehensible to the user, and the impact this can have on the selection of the underlying machine learning approach.

• Issues surrounding how users can be given appropriate levels of control while still avoiding undue levels of interruption.

The IOS infers its rules from the context history (effectively a data set generated using a variety of sensors) associated with the user. Part of our original motivation was to determine whether it would be possible for context history to be used to infer the nuances of a given user’s behaviour. For example, with regard to the temperature inside an office, there may be numerous ways in which a user might attempt to cool the temperature. She might open the door (if it is cooler outside), close the blind (if strong sunlight is shining into the office), turn on the fan (if she is not engaged in an activity where the noise of the fan would be distracting), open a window (if it is not too noisy outside), etc. Our hypothesis was that by having an appropriate set of sensors recording context such as office temperature, level of light, etc., it would be possible for a system to learn the user’s favoured approach for controlling the office temperature and then (to an extent acceptable to the user) automate the process, e.g. by turning on the fan.

The system that we have developed learns from a context history using either a standard or a fuzzy decision-tree-based algorithm; the latter is capable of providing the user with an idea of the confidence level associated with the inferred rules. Adopting this decision tree approach enables the user to inquire into the behaviour of the system by allowing her to scrutinise and possibly override the ‘IF-THEN’ type rules held in her user model. Currently, a number of users in a university department are running the system (the software for which is publicly available) in order to explore the feasibility of the approach and user acceptance. We have also carried out a questionnaire-based survey (completed by 30 participants) in order to obtain further insights into the levels of comprehensibility and control sought by its users.

The remainder of this article is structured as follows. In the following section, we discuss related work in this particular application domain. Next, in Section 3, we discuss our approach in detail, both in terms of research methodology and technical decisions (including our use of decision tree learning and fuzzy logic). The IOS is of course composed of both software and hardware components (not forgetting the user). In Section 4, we describe the hardware used by the system to sense the user’s environment and actuate supported devices. Then Section 5 describes, by way of a walkthrough, the user interface to the IOS. In Section 6, we describe an analysis of the findings to date, including lessons learned and the results of the questionnaire-based user survey. The penultimate section on future work is followed by a section presenting a summary and concluding remarks.

2. Related Work

This section initially outlines some representative proactive systems developed for home environments, which particularly address the issues of comprehensibility and control. We continue by describing research projects which specifically focus on the relationship between user control and service automation. Finally, we discuss issues related to interruptibility (arising in the context of proactive behaviour).

2.1. user control in automated/context-aware systems

The aim of the Adaptive House project (Mozer and Miller, 1998) at the University of Colorado, Boulder, was to make a home that can adjust its functions to the schedules and lifestyles of the inhabitants. A house in Boulder, Colorado was fitted with more than 75 sensors capable of sensing the physical environment in the house. Data were gathered, for example, on room temperature, light level, sound level, and door and window positions. The developers combined two machine learning algorithms in order to realise adaptive behaviour. First, an artificial neural network attempts to predict the likely location of an inhabitant in the next two seconds. Second, a reinforcement learning algorithm determines an action (e.g., turning lights on/off and setting their intensities) that has the minimum expected discounted cost with respect to the prediction of the inhabitants’ location. Unlike our approach, the behaviour of the Adaptive House control system cannot be scrutinised, nor can the user explicitly influence it.

The EasyLiving Project (Brumitt et al., 2000) was concerned with the development of an architecture and suitable technologies to enable typical PC-focused activities to move from a fixed desktop into the environment as a whole. The project focused on the technologies of middleware, geometric world modelling, perception, and service description. Furthermore, the project team implemented several applications to operate within an intelligent space where a user model of each person is maintained, storing, for example, the user’s physical appearance, authentication instructions, and also personal preferences. One application, ‘Media Control’, was capable of proactively playing various media types (for example, a CD, MP3, DVD or videotape) based on the user’s preferences and current location.

Intille (2002) designed the Changing Places/House_n, a full-scale single-family home with an integrated and ubiquitous sensor architecture, as a model home for the future. The house was outfitted with a computer-controlled heating, ventilating and air-conditioning (HVAC) system, but this was not used to fully automate control of the environmental conditions. Instead, the system provides subtle reminders to the inhabitants to perform a certain action. For example, the system will not open the window by itself but will turn on a tiny light on the window in order to inform the user that it is time to open the window. In this scenario, Intille raised two important issues that are closely related to our research (and which we discuss in more detail in Sections 6.2.3.1 and 7.4):

• There is an inherent tension between the amount of control the user has over a system and the benefits she may enjoy from proactive services.

• The suggestion to be presented to a user should not divert the user’s attention from their current task.

With regard to these issues, Intille (2002) argues that the complexity of algorithms for making a decision (e.g. opening the window) may prevent the user from understanding the behaviour of the system and could therefore be perceived as “unexplainable intelligence”.

The three systems outlined above mark distinct areas within a design space along the two dimensions of control and scrutability. Control may vary from complete system control to full user control, whereas scrutability varies from low to high (see Figure 1).

The IOS, which is the focus of this article, offers high scrutability and allows the user to exert a high level of control.

The following systems represent cases of supporting proactive behaviour based on some model of the user or inhabitants. Mitchell et al. (1994) developed the Calendar APprentice (CAP) application to assist a user with calendar scheduling based on the user’s scheduling preferences. In particular, through the design of the CAP, the researchers explored the potential of machine learning methods for the implementation of personal software assistants. Each night, the CAP system runs a decision tree learning algorithm on a chunk of the user’s most recent calendar information, in order to refine the set of rules that will be used to provide advice on the following day. The work described in this article extends the approach presented by Mitchell et al. by using fuzzy decision tree learning to provide users with both user-confirmed and fully automated proactive behaviour for multiple services.

Figure 1. Two-dimensional design space spanning control and scrutability dimensions (axes: system control to user control, low to high scrutability; systems shown: Adaptive House (Mozer and Miller, 1998), Changing Places/House_n (Intille, 2002), EasyLiving Project (Brumitt et al., 2000), and the Intelligent Office System).

Barkhuus and Dey (2003) report on findings from a study in the context of a mobile scenario, where they investigated the relationship between user control and service automation. In their particular setting, mobile phone users were willing to give up control for a number of (future) services such as tracking the location of friends or recommendation of nearby restaurants at lunch time. Although their study relied on the participants to imagine their usage patterns if such a service were available, one of their main findings was that users were willing to give up control if the benefits (i.e. the convenience or added value) of doing so were high.

However, there may be further, less obvious, drawbacks to giving up control. For example, Intille and Larson (2003) cite Rodin and Langer (1977) to point out that “lack of control over aspects of life has been shown to diminish health”. This is the main reason why, in the Changing Places/House_n project, they decided to provide subtle clues to the user to perform an action (e.g., a glowing light switch to suggest turning on the light) instead of a fully automated system.

In this special issue, Carmichael et al. describe their accretion-resolution user modeling representation, which enables a user to scrutinise her user model, the processes that determine its content, and the way that it is used in the ubiquitous computing environment. Their approach is also capable of representing other entities (e.g., sensors), and resolving (by assigning reliability metrics) the fact that certain sensors (e.g., location sensors) may provide conflicting evidence regarding the user’s context at a given time.


2.2. interruptibility

A further issue in the context of proactive behaviour is the kind and impact of the interruption that occurs when a system performs a proactive action or raises the user’s awareness of a planned proactive action.

There is a negative relationship between interruption frequency and performance on complex tasks (Speier et al., 1997). In addition, switching between tasks comes with a time cost that increases with task complexity and unfamiliarity (Rubinstein et al., 2001). McFarlane and Latorella (2002) suggest that the timing of an interruption must be context-sensitive.

There appear to be individual differences in the ability of people to accommodate interruptions, and this fact must be accounted for in an intelligent system (McFarlane and Latorella, 2002). Therefore, the adaptivity of such a system should be tuned to a user’s profile in terms of her behaviour patterns and her ways of handling interruptions. Individual differences in terms of sensitivity to different modalities appear to have greater impact on the disruption caused by interruptions than the interruption modalities themselves, i.e. heat, smell, sound, vibration, light (Arroyo et al., 2002).

McFarlane and Latorella (2002) suggest various methods for responding to interruptions and discuss the following four basic solutions for coordinating interruptions: (i) as soon as the system identifies the need for proactive behaviour (immediate interruptions); (ii) at a moment specified by the user after previous negotiation with the system (negotiation); (iii) at a moment decided on by the system as appropriate for interruption (mediated); or (iv) all the interruptions can occur at once at a previously arranged moment (scheduled). The findings in McFarlane and Latorella (2002), obtained through studying the effects of interruption on tasks carried out on a desktop computer, suggest that negotiation is the best solution in terms of task performance for almost all situations, but for those where the timing for handling the interruption is critical, the best solution is the immediate interruption.

An excellent analysis of the critical factors surrounding the effective integration of automated services with direct manipulation interfaces is provided by Horvitz (1999). It includes the challenge of decision making under uncertainty (e.g., about the user’s current goals), the need to consider the status of a user’s attention in the timing of services, and “employing dialog to resolve key uncertainties” while “considering the costs of potentially bothering a user needlessly”.

3. Approach

3.1. overview of the technical approach

In order to ascertain the feasibility of supporting modelling-based proactive adaptations in a user’s office environment, our current work has involved the design and implementation of a system to:


• Utilise context history in order to learn the patterns of the user’s behaviour in her physical office environment.

• Support modelling-based proactive adaptations (opening/closing the window, turning on/off the fan, etc.) based on both the patterns learned (represented as a set of rules) and the state of the physical office environment (realised through a set of sensors).

The contexts considered in the experiment are: temperature, humidity, noise level, light level, the status of the window, the status of the fan, and the location of a user. Our system collects and accumulates these contexts as a context history. An early version of the system required the user to notify the system explicitly of instances when she turned on/off the fan or opened/closed the window in her office. Unsurprisingly, initial user trials showed that this task was simply too arduous for the user (and hardly in the spirit of ubicomp). Therefore, we have redesigned the system to collect context automatically from the user’s physical environment if she has the appropriate sensors available.

Once a context history has been gathered, a set of human-comprehensible rules is induced, which represents the user’s preferences for the activation of her fan and heater. On the basis of these rules, our system can provide a suggestion to the user when the environmental context in the user’s office changes (e.g., if the temperature rises above a given threshold value). It is important to note that, whenever the system suggests an adaptation (e.g., “Shall I turn on the fan?”), the user can dismiss the suggestion.
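The suggestion mechanism described above can be sketched as a simple rule-matching loop. The code below is an illustrative sketch only: the dictionary rule representation and all names are our own, not the deployed IOS implementation.

```python
# Sketch (not the deployed IOS code): match learnt IF-THEN rules against
# the current symbolised context and offer each firing action as a
# dismissible suggestion via a user-confirmation callback.

def rule_fires(rule, context):
    """A rule fires when every condition on its left-hand side
    matches the current (symbolised) context."""
    return all(context.get(attr) == value
               for attr, value in rule["conditions"].items())

def suggest(rules, context, confirm):
    """Offer the action of each firing rule; `confirm` lets the
    user accept or dismiss the suggestion."""
    accepted = []
    for rule in rules:
        if rule_fires(rule, context) and confirm(rule["action"]):
            accepted.append(rule["action"])
    return accepted

rules = [{"conditions": {"Temp": "hot", "Window": "closed"},
          "action": "turn fan on"}]
context = {"Temp": "hot", "Window": "closed", "Light": "normal"}

# In the real system `confirm` would prompt the user; here it auto-accepts.
accepted = suggest(rules, context, confirm=lambda action: True)
```

In the deployed system the confirmation step is exactly where the user may dismiss the suggestion; the callback makes that choice explicit in the sketch.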

An actuator based on X10 (X10, 2004) technology is used to turn on/off the user’s heater, fan or lamp. An early version of the system required the user to ‘imagine’ that the system was capable of turning on the fan when the physical context conditions were met. Again, we found that this approach was far from suitable for enabling us to gain an appreciation of the real reaction that a user would have to the system actually turning on her fan; this highlights the importance of using working prototypes when developing this kind of interactive ubicomp application.

Our system comprises a database with several tables, storing raw context history, symbolised context history and both learnt and user-specified rules. As illustrated in Figure 2, the system also consists of a context manager, an inference engine, and an adaptation manager.

Figure 2. The design for providing dynamic adaptations.

The context manager collects context from sensors (Byun and Cheverst, 2004). If within a period of one minute there is at least one change in a raw context value (e.g., if the temperature changes from 24 to 25 °C, or the user turns on/off the fan), then the context manager generates a “context changed” event and stores the context in the context history. That is, while no state changes occur, the size of the stored context history remains constant. The context manager also performs the symbolisation of raw context values.
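The change-driven logging just described can be sketched as follows; this is an illustrative sketch (function and field names are assumptions, not the actual context manager code):

```python
# Sketch of change-driven context logging: a sensor sample is appended
# to the context history only when at least one raw value differs from
# the previous entry, so the stored history stays constant while
# nothing changes.

def update_history(history, sample):
    """Append `sample` (a dict of raw sensor values) only if it
    differs from the most recent entry; return True when a
    'context changed' event should be generated."""
    if history and history[-1] == sample:
        return False          # no state change: history size constant
    history.append(dict(sample))
    return True

history = []
update_history(history, {"temp": 24, "fan": "off"})   # first sample: stored
update_history(history, {"temp": 24, "fan": "off"})   # unchanged: ignored
update_history(history, {"temp": 25, "fan": "off"})   # change: stored
```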

The adaptation manager listens for the “context changed” events generated by the context manager. Next, the adaptation manager decides which adaptation must be made on the basis of both the current context (e.g., the current temperature) and the user model (which contains the set of learnt or predefined rules that represent the user’s preferences).

The inference engine is responsible for extracting rules based on the context history and placing these in the user model.

3.2. selection of a decision-tree-based learning algorithm

In the research field of user modelling, machine learning is actively investigated as a practical method to learn a user’s interests, preferences, knowledge, goals, habits, and/or other properties in order to adapt services to the user’s individual characteristics (see, for example, Mitchell et al., 1994; Pohl, 1996). As was first described in the introduction, the primary requirement for our system was that it should facilitate comprehensibility in order for the user to be able to understand what is being done by the system and why the system is doing it (i.e., to scrutinise the behaviour of the system). This overriding system requirement implied selecting a learning algorithm that could provide the user with an explicit and understandable explanation for proactive behaviour. Generating rules that are readily intelligible to humans is one of the advantages of decision tree learning (Mitchell et al., 1994). Thus, at an early design stage, we chose to adopt a decision tree learning approach (Byun and Cheverst, 2003).
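As a sketch of why decision trees suit this requirement, a learnt tree can be flattened mechanically into human-readable ‘IF-THEN’ rules. The toy tree below is hand-written for illustration; it is not a tree inferred by the system, and the nested-tuple representation is our own.

```python
# Illustrative only: flattening a decision tree into the readable
# IF-THEN rules referred to in the text. Each root-to-leaf path
# becomes one rule; the tree here is hand-written, not learnt.

def tree_to_rules(node, conditions=()):
    """Depth-first walk; each leaf yields one 'IF ... THEN ...' rule."""
    if isinstance(node, str):                      # leaf: an action
        lhs = " AND ".join(f"{a} = {v}" for a, v in conditions)
        return [f"IF {lhs} THEN Fan = {node}"]
    attr, branches = node                          # internal node
    rules = []
    for value, child in branches.items():
        rules += tree_to_rules(child, conditions + ((attr, value),))
    return rules

tree = ("Temp", {"hot": "on",
                 "mild": ("Window", {"closed": "off", "open": "off"}),
                 "cold": "off"})
rules = tree_to_rules(tree)
# e.g. one resulting rule: "IF Temp = hot THEN Fan = on"
```

This path-to-rule mapping is what makes the learnt model directly scrutable: every proactive action can be traced back to one such rule.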

3.3. use of fuzzy logic

While a decision tree is accessible to an average human user, the underlying model of crisp cut points (see the upper image in Figure 3) does not quite match human thinking. To alleviate this problem without sacrificing scrutability of the inference process, we chose to use fuzzy logic. Instead of crisp boundaries between categories, fuzzy logic introduces a membership function, which reflects how well a given value falls into a category (see the lower image in Figure 3). On the basis of a fuzzy representation of context (Mantyjarvi and Seppanen, 2002), we use fuzzy decision trees (Janikow, 1996) for inference (see also Zeidler and Schlosser, 1996; Shiu et al., 2000; Guetova et al., 2002). Our rationale for the use of a fuzzy decision tree is its potential to express symbolically the rules governing the system’s proactive behaviour. In turn, this representation supports users’ understanding of these rules, which provides a basis for scrutinising system behaviour. Enabling users’ access to a symbolic representation of these rules (i.e. rule visualisation) supports users in developing both a structural (“how the system works”) and a functional (“how the system can be used”) model of the system.

Figure 3. Conventional (top) and fuzzy (bottom) representations of temperature.

Figure 3 illustrates conventional and fuzzy representations of temperature. Using the example partitions shown in this figure, the temperature 26 °C would be discretised into the categorical value ‘hot’ with the membership value 0.75, and the temperature 28 °C would be discretised into the same categorical value but with the membership value 1. Consequently, even though the two temperatures 26 and 28 °C have the same categorical value ‘hot’, they have different membership values, in accordance with people’s natural way of understanding such categorisations. Therefore, we would argue that this fuzzy representation improves the communication between users and our system. Note that the cut points shown in Figure 3 are for illustration only and that the system we present later in this article provides a means for the user to specify their own cut points if they so desire.
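The fuzzy representation in Figure 3 can be sketched as a simple ramp membership function. The cut points below are our assumption, chosen only so that the function reproduces the membership values quoted above (0.75 at 26 °C, 1 at 28 °C); as noted, the deployed system lets the user set her own cut points.

```python
# Sketch of a 'hot' membership function in the spirit of Figure 3.
# The cut points (a linear ramp from 23 C to 27 C) are an assumption
# made for illustration, chosen to reproduce the values in the text.

def mu_hot(temp, lo=23.0, hi=27.0):
    """Ramp membership: 0 below `lo`, 1 above `hi`, linear in between."""
    if temp <= lo:
        return 0.0
    if temp >= hi:
        return 1.0
    return (temp - lo) / (hi - lo)

assert mu_hot(26) == 0.75   # partial membership in 'hot'
assert mu_hot(28) == 1.0    # full membership in 'hot'
```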

The membership functions of the fuzzy sets, in conjunction with information gains, are used to build a fuzzy decision tree. We will illustrate this process using the small set of context history (in the form of continuous values) shown in Table I (note that this same history is also used in the user interface walkthrough presented in Section 5). Table II shows the discretised set of context history based on Table I and the cut points for each attribute defined in our prototype system (the actual cut points used are those shown in Figure 15). For example, in the case of the first row, the temperature 23 °C is discretised to ‘mild’ with membership 1 on the basis of the cut points in Figure 15. Other continuous-valued attributes can be discretised in the same manner:

NoiseLevel is ‘loud’ with membership 1,
Humidity is ‘normal’ with membership 0.6,
Light is ‘normal’ with membership 1.

Table I. A continuous-valued raw context history

Nr.  Date        Time      Temp  NoiseLevel  Humidity  Light  Window  Fan  Heater
 1   2004-26-11  14:36:01  23    55          30        52     Closed  Off  Off
 2   2004-26-11  14:37:01  24    55          30        49     Closed  On   Off
 3   2004-26-11  14:38:01  25    55          30        51     Closed  On   Off
 4   2004-26-11  14:39:01  26    55          30        50     Closed  On   Off
 5   2004-26-11  14:40:01  22    68          30        50     Closed  Off  Off
 6   2004-26-11  14:41:01  22    62          30        50     Closed  Off  Off
 7   2004-26-11  14:42:01  21    55          30        49     Closed  Off  On
 8   2004-26-11  14:43:01  20    55          30        50     Closed  Off  On
 9   2004-26-11  14:44:01  18    55          30        50     Closed  Off  On
10   2004-26-11  14:45:01  19    76          30        50     Closed  Off  On

Table II. A discretised context history including membership details of the Fan rule

Nr.  Date        Time      Temp  NoiseLevel  Humidity  Light   Window  Fan  Heater  Membership
 1   2004-26-11  14:36:01  Mild  Loud        Normal    Normal  Closed  Off  Off     0.6
 2   2004-26-11  14:37:01  Mild  Loud        Normal    Normal  Closed  On   Off     0.45
 3   2004-26-11  14:38:01  Hot   Loud        Normal    Normal  Closed  On   Off     0.3
 4   2004-26-11  14:39:01  Hot   Loud        Normal    Normal  Closed  On   Off     0.45
 5   2004-26-11  14:40:01  Mild  Loud        Normal    Normal  Closed  Off  Off     0.6
 6   2004-26-11  14:41:01  Mild  Loud        Normal    Normal  Closed  Off  Off     0.6
 7   2004-26-11  14:42:01  Mild  Loud        Normal    Normal  Closed  Off  On      0.45
 8   2004-26-11  14:43:01  Mild  Loud        Normal    Normal  Closed  Off  On      0.3
 9   2004-26-11  14:44:01  Cold  Loud        Normal    Normal  Closed  Off  On      0.6
10   2004-26-11  14:45:01  Cold  Loud        Normal    Normal  Closed  Off  On      0.45

In the case of the Window attribute, the membership value of ‘closed’ is consid-ered to be 1, because it is not a continuous-valued attribute. In order to capturethe overall extent to which the predictive attributes have the specified values, wedefine a membership value for the rule as a whole (see Table II) in terms of theproduct of the membership values of all of the attributes on the left-hand side:

$$\chi_{Temp}(mild) \cdot \chi_{NoiseLevel}(loud) \cdot \chi_{Humidity}(normal) \cdot \chi_{Light}(normal) \cdot \chi_{Window}(closed) = 1.0 \cdot 1.0 \cdot 0.6 \cdot 1.0 \cdot 1.0 = 0.6$$
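As an illustrative sketch (not the IOS code itself), the discretisation and the rule-membership product for row 1 of Table I might be computed as follows. The cut points and ramp widths below are invented for the example; the real ones come from the user-editable ‘cut-points’ file:

```python
def ramp_up(x, a, b):
    """Linear membership: 0 below a, rising to 1 at b (clamped)."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

# Hypothetical memberships for the attribute values in row 1 of Table I.
memberships = {
    'Temp = mild':       ramp_up(23, 18, 21),   # 23 degC: fully 'mild' -> 1.0
    'NoiseLevel = loud': ramp_up(55, 45, 50),   # 55 dBA: fully 'loud' -> 1.0
    'Humidity = normal': ramp_up(30, 24, 34),   # 30 %: 'normal' to degree 0.6
    'Light = normal':    ramp_up(52, 40, 45),   # 52 lux: fully 'normal' -> 1.0
    'Window = closed':   1.0,                   # categorical attribute: crisp 1.0
}

# The rule membership is the product of the per-attribute memberships.
rule_membership = 1.0
for mu in memberships.values():
    rule_membership *= mu

print(round(rule_membership, 2))  # 0.6
```

The product form means that a single poorly matching attribute drags down the confidence of the whole rule, which is the intended behaviour.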

246 KEITH CHEVERST ET AL.

3.3.1. Calculating the Root Node for the Fuzzy Tree with the Fan target function

The first step in building a fuzzy decision tree is finding the root node. For this purpose, the information gain for each attribute A must be calculated. In the following calculation, S refers to the whole set of context history and the base of the logarithm is 2. The formulas for the calculation of entropy and information gain are essentially the same as those for building a conventional decision tree. However, in the case of a fuzzy decision tree, the membership values of the target attribute (for example, the first row membership of Fan is 0.6) should be used in the calculation. The corresponding formulas for the entropy E(S) and the information gain G(S, A) are as follows:

$$E(S) = -\frac{m_{on}}{m_{Fan}}\log\frac{m_{on}}{m_{Fan}} - \frac{m_{off}}{m_{Fan}}\log\frac{m_{off}}{m_{Fan}}$$

where

$$m_{on} = \sum_{s \in S} \chi_{Fan}(on) \quad \text{(the sum of all membership values for } Fan = on\text{)}$$

$$m_{off} = \sum_{s \in S} \chi_{Fan}(off) \quad \text{(the sum of all membership values for } Fan = off\text{)}$$

$$m_{Fan} = m_{on} + m_{off}$$

$$G(S, A) = E(S) - \frac{m_x}{m_A}E(S, A = x) - \frac{m_y}{m_A}E(S, A = y) - \frac{m_z}{m_A}E(S, A = z)$$

where

$$m_x = \sum_{s \in S} \chi_A(x), \quad m_y = \sum_{s \in S} \chi_A(y), \quad m_z = \sum_{s \in S} \chi_A(z), \quad m_A = m_x + m_y + m_z$$
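The entropy and gain formulas can be sketched in a few lines of Python. The rows below reproduce the Temp labels, Fan states and memberships of Table II; the function names are ours, not those of the IOS implementation:

```python
from math import log2

# (Temp label, Fan state, row membership) taken from Table II.
rows = [
    ('Mild', 'Off', 0.60), ('Mild', 'On',  0.45), ('Hot',  'On',  0.30),
    ('Hot',  'On',  0.45), ('Mild', 'Off', 0.60), ('Mild', 'Off', 0.60),
    ('Mild', 'Off', 0.45), ('Mild', 'Off', 0.30), ('Cold', 'Off', 0.60),
    ('Cold', 'Off', 0.45),
]

def fuzzy_entropy(rows):
    """E(S): entropy over membership mass rather than row counts."""
    m_on  = sum(m for _, fan, m in rows if fan == 'On')
    m_off = sum(m for _, fan, m in rows if fan == 'Off')
    total = m_on + m_off
    e = 0.0
    for m in (m_on, m_off):
        if m > 0:  # 0 * log(0) is taken as 0
            e -= (m / total) * log2(m / total)
    return e

def gain(rows, labels=('Hot', 'Mild', 'Cold')):
    """G(S, Temp): entropy reduction from splitting on the Temp labels."""
    total = sum(m for _, _, m in rows)
    g = fuzzy_entropy(rows)
    for lab in labels:
        subset = [r for r in rows if r[0] == lab]
        m_lab = sum(m for _, _, m in subset)
        g -= (m_lab / total) * fuzzy_entropy(subset)
    return g

print(round(fuzzy_entropy(rows), 2))  # 0.81
print(round(gain(rows), 2))           # 0.43
```

The computed values match the worked derivation that follows (the text rounds the entropy slightly differently).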

Therefore, the entropy of the whole set and the information gain of Temp can be calculated from the membership values shown in the last column of Table II as follows:

$$m_{on} = 0.45 + 0.3 + 0.45 = 1.2$$

$$m_{off} = 0.6 + 0.6 + 0.6 + 0.45 + 0.3 + 0.6 + 0.45 = 3.6$$

$$E(S) = -\frac{1.2}{4.8}\log\frac{1.2}{4.8} - \frac{3.6}{4.8}\log\frac{3.6}{4.8} = 0.813$$

$$E(S, Temp = hot) = -\frac{0.3 + 0.45}{0.3 + 0.45}\log\frac{0.3 + 0.45}{0.3 + 0.45} - \frac{0}{0.3 + 0.45}\log\frac{0}{0.3 + 0.45} = 0$$


$$E(S, Temp = mild) = -\frac{0.45}{\underbrace{0.6 + 0.45 + 0.6 + 0.6 + 0.45 + 0.3}_{=3.0}}\log\frac{0.45}{3.0} - \frac{2.55}{3.0}\log\frac{2.55}{3.0} = 0.61$$

$$E(S, Temp = cold) = -\frac{0}{0.6 + 0.45}\log\frac{0}{0.6 + 0.45} - \frac{1.05}{0.6 + 0.45}\log\frac{1.05}{0.6 + 0.45} = 0$$

$$G(S, Temp) = E(S) - \frac{0.75}{4.8}E(S, Temp = hot) - \frac{3.0}{4.8}E(S, Temp = mild) - \frac{1.05}{4.8}E(S, Temp = cold) = 0.43$$

In the case of the attributes NoiseLevel, Humidity, Light, and Window, all information gains are zero since they have only one attribute value (for example, all values of NoiseLevel are ‘loud’). According to the information gain measure, Temp would provide the best prediction for the target attribute Fan. Therefore it is selected as the root node.

Now let us consider a subset of the context history that contains the cases where temperature is mild (row numbers 1, 2, 5, 6, 7, 8). In this case, the information gains of all remaining attributes are also zero, because they have only one attribute value. Therefore, we can create a leaf node here. The membership values of ‘on’ and ‘off’ in the leaf node can be calculated by normalizing the target membership values in the subset: the membership of ‘on’ is:

$$\frac{m^{Temp=mild}_{on}}{m^{Temp=mild}} = \frac{0.45}{0.6 + 0.45 + 0.6 + 0.6 + 0.45 + 0.3} = 0.15$$

and the membership of ‘off’ is

$$\frac{m^{Temp=mild}_{off}}{m^{Temp=mild}} = \frac{0.6 + 0.6 + 0.6 + 0.45 + 0.3}{0.6 + 0.45 + 0.6 + 0.6 + 0.45 + 0.3} = 0.85$$

where

$$m^{Temp=mild}_{on} = \sum_{s \in S \,|\, Temp=mild} \chi_{Fan}(on) \quad \text{(the sum of all membership values for } Fan = on\text{)}$$

$$m^{Temp=mild}_{off} = \sum_{s \in S \,|\, Temp=mild} \chi_{Fan}(off) \quad \text{(the sum of all membership values for } Fan = off\text{)}$$

$$m^{Temp=mild} = m^{Temp=mild}_{on} + m^{Temp=mild}_{off}$$

Since the membership value of ‘off’ for this node is greater than the one for ‘on’, we can extract the rule ‘if Temp = mild and NoiseLevel = loud and Humidity = normal and Light = normal and Window = closed, then Fan = off with membership 0.85’.
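The normalisation step above can be sketched as a toy snippet over the Temp = mild subset of Table II (our illustration, not the system's implementation):

```python
# Target memberships of the Temp = mild subset (rows 1, 2, 5, 6, 7, 8 of
# Table II), normalised to give the leaf's 'on'/'off' memberships.
mild_subset = [('On', 0.45), ('Off', 0.60), ('Off', 0.60),
               ('Off', 0.60), ('Off', 0.45), ('Off', 0.30)]

m_total = sum(m for _, m in mild_subset)  # 3.0
mu = {fan: sum(m for f, m in mild_subset if f == fan) / m_total
      for fan in ('On', 'Off')}

print(round(mu['On'], 2), round(mu['Off'], 2))  # 0.15 0.85
```

The larger of the two memberships ('Off', 0.85) becomes the confidence of the extracted rule.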



Figure 4. (a) A fuzzy decision tree for Fan (see Section 3.3.1 for justification of the values shown in nodes). (b) A conventional decision tree for Fan.

In the cases of the subsets of context history for Temp = ‘hot’ and Temp = ‘cold’, the information gains of all remaining attributes are zero for the same reason as in the above case, and within each subset all target values are the same (‘on’ for hot, ‘off’ for cold). Therefore, we can create leaf nodes here without normalizing the target membership values. The rules from these two cases are: ‘If Temp = hot, then Fan = on with membership 1’ and ‘If Temp = cold, then Fan = off with membership 1’. Now we can draw a fuzzy decision tree as shown in Figure 4a. In contrast, if a conventional nonfuzzy decision tree algorithm were used, the tree produced would appear as that shown in Figure 4b. Please note that the peculiar shape of the decision trees is a result of the small size of the dataset used in the example. The trees generated by the actual system are more complex, having a higher branching factor on each level.


3.3.2. Illustration of Adaptation Procedure based on the Fuzzy Tree with the Fan Target Attribute

In this subsection, we present an actual example of how adaptation is performed by our prototype system. The example is based on the fuzzy decision tree shown in Figure 4a, which was derived from the small sample of context history shown in Table I. Let us assume that the environmental conditions in the user’s office are as follows:

Temperature is 26 °C, NoiseLevel is 55 dBA, Humidity is 29%, Light is 68 lux, and Window is closed.

The first step of adaptation in this new situation is the fuzzification of each context attribute. The corresponding membership functions are shown in the boxes between the nodes in the fuzzy decision tree shown in Figure 4a. For example, according to the box shown just below the ‘NoiseLevel’ node, 55 dBA would be categorised as ‘loud’ with membership 1. We can then compute the value of the membership function χ for each value of the target attribute Fan in each leaf node (A, B, C):

Leaf A:

$$\chi(Fan = on) = \chi_A(Fan = on)\,\chi(Temp = hot) = 1 \cdot 0.75 = 0.75$$

$$\chi(Fan = off) = \chi_A(Fan = off)\,\chi(Temp = hot) = 0 \cdot 0.75 = 0$$

Leaf B:

$$\chi(Fan = on) = \chi_B(Fan = on)\,\chi(Temp = cold) = 0 \cdot 0 = 0$$

$$\chi(Fan = off) = \chi_B(Fan = off)\,\chi(Temp = cold) = 1 \cdot 0 = 0$$

Leaf C:

$$\chi(Fan = on) = \chi_C(Fan = on)\,\chi(Temp = mild)\,\chi(NoiseLevel = loud)\,\chi(Humidity = normal)\,\chi(Light = normal)\,\chi(Window = closed) = 0.15 \cdot 0.25 \cdot 1.0 \cdot 0.5 \cdot 0.5 \cdot 1.0 = 0.01$$

$$\chi(Fan = off) = \chi_C(Fan = off)\,\chi(Temp = mild)\,\chi(NoiseLevel = loud)\,\chi(Humidity = normal)\,\chi(Light = normal)\,\chi(Window = closed) = 0.85 \cdot 0.25 \cdot 1.0 \cdot 0.5 \cdot 0.5 \cdot 1.0 = 0.05$$

Therefore, the total membership values for ‘Fan = on’ and ‘Fan = off’ are:

$$\chi(Fan = on) = \sum_{A,B,C}\chi(Fan = on) = 0.75 + 0 + 0.01 = 0.76$$

$$\chi(Fan = off) = \sum_{A,B,C}\chi(Fan = off) = 0 + 0 + 0.05 = 0.05$$


Note that each leaf node holds a membership value for each categorical value of the target function (on and off), which stems from the membership values of the context history data. In order to choose the value of the target function, the membership values of the leaf nodes and the membership values of the current contexts are multiplied, and the products are summed for the same target values. Finally, the target value that has the largest total is selected as a suggestion, i.e. ‘on’ with certainty level 0.76 for this proactive service. Therefore, our system would turn on the fan (or suggest turning on the fan, depending on the user’s preferences) with a confidence level of 0.76 (provided that this value was higher than the threshold that determines whether proactive behaviour should occur).
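The selection step can be sketched as follows (a toy snippet, not the system's Java implementation; the leaf and path memberships are those of the worked example, and the 0.5 threshold is illustrative):

```python
# (membership of the current context along the path, {Fan value: leaf membership})
leaves = [
    (0.75,                         {'on': 1.0,  'off': 0.0}),   # leaf A (Temp = hot)
    (0.0,                          {'on': 0.0,  'off': 1.0}),   # leaf B (Temp = cold)
    (0.25 * 1.0 * 0.5 * 0.5 * 1.0, {'on': 0.15, 'off': 0.85}),  # leaf C (Temp = mild)
]

# Weight each leaf's target memberships by the path membership and sum
# the contributions for the same target value.
totals = {'on': 0.0, 'off': 0.0}
for path_mu, leaf_mu in leaves:
    for value, mu in leaf_mu.items():
        totals[value] += path_mu * mu

# The target value with the largest total becomes the suggestion,
# provided it exceeds the user's proactive threshold.
action, confidence = max(totals.items(), key=lambda kv: kv[1])
THRESHOLD = 0.5  # illustrative user-set proactive threshold
if confidence >= THRESHOLD:
    print(f'Fan -> {action} (confidence {confidence:.2f})')
# prints: Fan -> on (confidence 0.76)
```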

3.3.3. The Fluctuation Problem

In any control system that controls part of an environment populated by human users, fluctuations around a cut point pose a problem. For example, if a fan is turned on whenever the temperature is higher than 25 °C, it might happen that the system continuously turns the fan off and on if the temperature fluctuates around 25 °C. While fuzzy logic does not eliminate such problems per se, it is possible to apply certain strategies to address fluctuations.

Perhaps the simplest approach is to introduce a further action, ‘leave unchanged’, for the right-hand side of rules. For example, if the rules specify that the fan is to be turned off under 24 °C, to be turned on above 26 °C, and to be left unchanged otherwise, fluctuation on the basis of small temperature changes cannot occur. If, for example, the fan is on because the temperature is 27 °C, it will not be turned off again until the temperature drops below 24 °C.

In another approach, the time interval since the last change in the state of the device could be made a feature to be considered in the left-hand side of the rules. In that case, for example, the system might learn not to make any change sooner than 5 min after the previous change.
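The two strategies can be combined in a simple controller sketch (illustrative code, not the IOS implementation; the thresholds and the 5-minute interval follow the example values above):

```python
class FanController:
    """Dead band ('leave unchanged' between 24 and 26 degC) combined with a
    minimum interval between state changes."""

    def __init__(self, off_below=24.0, on_above=26.0, min_interval_s=300):
        self.off_below = off_below
        self.on_above = on_above
        self.min_interval_s = min_interval_s
        self.state = 'off'
        self.last_change = None

    def update(self, temp_c, now_s):
        # Suppress any change made too soon after the previous one.
        if (self.last_change is not None
                and now_s - self.last_change < self.min_interval_s):
            return self.state
        if temp_c > self.on_above:
            desired = 'on'
        elif temp_c < self.off_below:
            desired = 'off'
        else:
            desired = self.state  # dead band: leave unchanged
        if desired != self.state:
            self.state = desired
            self.last_change = now_s
        return self.state

fc = FanController()
print(fc.update(27.0, 0))    # on  (above 26 degC)
print(fc.update(23.0, 200))  # on  (change suppressed: < 5 min since last change)
print(fc.update(25.0, 400))  # on  (dead band: leave unchanged)
print(fc.update(23.0, 700))  # off (below 24 degC and interval elapsed)
```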

3.4. evaluation methodology

Given our interest in both the technical and human factors issues associated with the development of a system supporting proactive behaviour in an intelligent office environment, our evaluation methodology has been to start our exploration with the development of a simplified but working prototype system that can be deployed at a relatively early stage. This early deployment has enabled user studies to commence and provide valuable feedback with regard to user acceptance of the IOS concept and to drive future developments; in this respect we are adopting a user-centred design approach. In order to accelerate exploration of issues where appropriate, our evaluation methodology also includes the use of questionnaires to focus on specific issues that are revealed by user feedback from the prototype. For example, after the use of the prototype revealed issues about how users would


prefer to visualise inferred rules and the confidence values associated with those rules when scrutinising the behaviour of the system, a questionnaire was used to enable a relatively quick and low-cost investigation into this issue (the results of this particular aspect of our investigation are described in Section 6.2.3.2). However, as is argued below, the danger with this approach is that such responses are not provided ‘in situ’ and that therefore such results must be judged with a certain degree of scepticism. Our intention is for our current user studies to develop into longitudinal studies that should provide much-needed insights into the longer-term implications of using and maintaining the IOS. Given the relatively short-term nature of our studies, so far we can claim only to provide formative results that may point to directions for further study.

4. The Hardware and Software Components of the Intelligent Office System

4.1. hardware components

The hardware components of the system are required for sensing physical context in the user’s office and for actuating supported devices. Actuation (e.g., turning a device on or off) occurs if a rule learnt by the system is triggered by the current context (and the user has specified that actuation can occur without user verification) or if the user chooses to use the system’s graphical user interface to instruct the system to turn a given device on or off. The main hardware components of the system comprise two different sensors (the ‘off-the-shelf’ DrDAQ data logger and a purpose-built current sensor) and the actuation system (based on the X10 system). The following subsections describe these components in more detail.

4.1.1. The DrDaq Data Logger

In our experiment, we use a DrDAQ data logger from Pico Technology Ltd (see Figure 5) in order to obtain the state of the user’s office environment. The DrDAQ data logger has built-in sensors for detecting various environmental contexts (e.g., light level, sound level and temperature) and two external sensors. One of these external sensors is used to sense whether or not the user’s window is currently open, while the other one is used to sense humidity.

It is important to note that the accuracy of the temperature sensed by the DrDAQ is 2 °C at around 25 °C (the error margin depends on the temperature). If the threshold between ‘hot’ and ‘mild’ is 25 °C, then this 2 °C error margin could clearly be very significant (e.g., the same true temperature could be sensed as either ‘hot’ or ‘mild’). One approach we plan to investigate is the setting of appropriate thresholds to determine the change required in a given raw sensor value before a new value is recorded in the context history. This problem is similar in nature to the problem of reacting to apparent changes in location sensed by the Global Positioning System (GPS), in which the inaccuracy can be in the region of 10 m.
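A minimal sketch of this change-threshold idea (our illustration of the approach under investigation, not code from the current system):

```python
def filter_readings(readings, threshold=2.0):
    """Record a new raw value only when it differs from the last recorded
    value by more than the threshold (2 degC mirrors the DrDAQ temperature
    error margin quoted above)."""
    recorded = []
    for value in readings:
        if not recorded or abs(value - recorded[-1]) > threshold:
            recorded.append(value)
    return recorded

print(filter_readings([24.0, 24.9, 25.5, 23.1, 26.3, 26.8]))
# [24.0, 26.3]: fluctuations within the error margin are not recorded
```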


Figure 5. The current sensor (left), the current sensor attached to a user’s fan (centre) and the DrDAQ data logger (right).

As was mentioned previously, the DrDAQ module is used for detecting the state of the window. This task is accomplished with an external reed switch sensor that is positioned on the window frame and a small magnet located on the moving part of the window. When the window is closed, the magnet sits very close to the reed switch, causing it to open, and when the window is opened, the magnet moves away from the reed switch, causing it to close – these changes are recognised by the DrDAQ module and sent to the Intelligent Office System.

4.1.2. X10-Based Actuation

In order to achieve actuation of devices, we have utilised X10 technology. It is not possible to send X10 signals through the internal power wiring of the university Computing Department in which our experiments have been taking place, so a trivial but highly practical solution has simply been to plug the fan and the X10 controller into the same gangway extension (see Figure 5). Using this approach, the X10-based control signals need only travel successfully along the length of the gangway extension.

Unfortunately, the reliability of standard X10 signalling cannot be guaranteed or even verified in many cases, so to be sure of the status of a device, we have developed the current sensor (i.e., the sensor for detecting changes in electrical current) described in the following section.


4.1.3. Current Sensor

Early user trials revealed that typical users tend to be reluctant to click manually on a check box in a dialog box every time they change the state of the fan in their office in order to update their context history. Consequently, we set about developing a system that would enable the action of turning on a device to be detected automatically. The current sensor that we developed (specifically for the deployment of the IOS) is able to detect whether the fan, or any other standard electrical appliance connected to the current sensor, is switched on or off at any given time. It does this by measuring the amount of current that passes through the mains power cable. A magnetic coil wound around the mains line produces a voltage, proportional to the amount of current drawn by the appliance, whenever the appliance is turned on. This voltage is continuously sampled using an analogue-to-digital (A/D) converter and a PIC microcontroller. Software running on the PIC then detects transitions between the ‘ON’ and ‘OFF’ states of the appliance on the basis of the output of the A/D converter. The current sensor in Figure 5 is based on a Smart-Its (Holmquist et al., 2001) prototyping device. It provides an RS232 and a wireless radio interface to other devices. This arrangement allows direct interfacing with the IOS running on the PC.
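The transition detection can be sketched as follows (a Python illustration of the kind of logic that runs on the PIC; the voltage threshold and sample values are assumptions for the example):

```python
def detect_transitions(samples, threshold=0.2):
    """Yield (sample index, new state) each time the appliance changes
    state, based on the sampled A/D voltage crossing a threshold."""
    state = False  # assume the appliance starts off
    for i, v in enumerate(samples):
        new_state = v > threshold
        if new_state != state:
            state = new_state
            yield i, 'ON' if state else 'OFF'

# Sampled coil voltages: quiescent, then the appliance on, then off again.
samples = [0.01, 0.02, 0.85, 0.90, 0.88, 0.03, 0.02]
print(list(detect_transitions(samples)))  # [(2, 'ON'), (5, 'OFF')]
```

Reporting only transitions, rather than every sample, is what lets a state change be recorded in the context history exactly once.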

4.2. software components

The database server used is the open source product MySQL (www.mysql.com). In addition to MySQL’s excellent reliability and ease of use, its open source nature enables the entire IOS software to be made available for experimentation by other research groups free of charge.

Referring back to the overall architecture diagram illustrated in Figure 2, the ‘Context Manager’, ‘Inference Engine’ and ‘Adaptation Manager’ components of the software are all written in Java and are designed to be compiled and run under any Java 2 environment. Additionally, MySQL provides an official Java DataBase Connectivity (JDBC) driver, which was used throughout the system.

A significant portion of the software code is user interface code. The following section describes in detail the user interaction supported by the system through its graphical user interface.

5. User Interface Walkthrough

In order to help illustrate the behaviour of the system, we have generated rules that are consistent with a sample piece of context history. We will now show what behaviour would be triggered on the basis of hypothetical environment values such as temperature.

The current GUI is based on a main control window that is designed to run on a display that is separate from the user’s primary desktop display (or displays), as is shown below in Figure 6.


Figure 6. The typical deployment of the control system (on a separate display).

The Main Control window of the IOS is shown in Figure 7. This GUI is divided into four parts which correspond to different aspects of the functionality. The Device Control part (in the left-hand two-thirds of the window) enables the user to turn devices in the office on and off, to set preferences for fully automated behaviour, and to view associated rules. The current state of a device is shown through text and through greying out of the text on the button that is not needed. The Current State of Environment part (top right-hand part of the window) shows the values currently being read from the DrDAQ sensor and also presents these values in terms of their main fuzzy classification (in order to help reinforce the user’s understanding of the relationship between numeric values and classifications). An earlier version of the software did not contain this information, and it was only added following requests from a number of users.

The Setting Location part (the centre right-hand part of the window) is the one place where the system still requires the user to inform the system manually of a context event (i.e., whether the user is in the office or not); we are currently investigating a range of techniques for automatically sensing this information with the high degree of accuracy required. Finally, the Intelligent Office Functions part (the bottom right-hand part of the window) is the primary part of the GUI for enabling the user to control the behaviour of the IOS, including the ability to set the proactive threshold to an appropriate level (including ‘off’) and to view the context history.


Figure 7. The Main Control GUI.

The following two sections describe the Device Control and Intelligent Office Functions parts of the main GUI in more detail.

5.1. device control

The Device Control part of the window enables the user to turn devices on or off and is intended to be the primary means for users to control devices. We have found this part of the window to be much used in practice, largely due to the convenient placement of the touch screen display (as shown in Figure 6). A major benefit of this approach is that context relating to the turning on and off of devices can be automatically captured and stored in the context history.

5.2. when a rule for device actuation is triggered

When a suggestion prompt is issued (which occurs if the user has indicated that a prompt rather than automatic action is required), it is displayed on the main control GUI. For example, if the system suggests that the fan should be turned off, then the UI changes to that shown below in Figure 8 and the text on the ‘OFF’ button flashes black and white. Note that in this example, the user is shown the confidence level of the rule as a categorical value, in this case ‘High’ (see Section 6.2.3.3 for questionnaire results regarding users’ preferences for visualising confidence levels).


Figure 8. Display of a system prompt.

However, by altering her preferences, a user can also choose to view the confidence level as a decimal number, e.g. 0.85, if she prefers.

The user will not always want the system to offer a prompt before turning on a device. For this reason, the GUI presents the user with tick boxes for allowing proactive actions to occur automatically. Given the selection of tick boxes shown in Figure 8, if a rule (with a high confidence level, as per the user’s preferences) became triggered for turning on the user’s fan, the fan would automatically be turned on and the system would display the following message: “I have just turned on the fan for you (confidence level: High)”. This facility was incorporated on the basis of feedback from users of the system and the results of the questionnaire study (see Section 6.2.3.1).
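The resulting decision logic can be sketched as follows (an illustrative snippet; the confidence-category boundaries and the function names are our assumptions, while the automatic-action message follows the example above):

```python
def categorise(confidence, high=0.7, medium=0.4):
    """Map a numeric confidence to a category; the boundaries are
    user-configurable preferences (the values here are illustrative)."""
    if confidence >= high:
        return 'High'
    return 'Medium' if confidence >= medium else 'Low'

def handle_triggered_rule(device, action, confidence, auto_enabled, threshold=0.7):
    """Return the system's response to a triggered rule, or None if the
    rule's confidence falls below the proactive threshold."""
    if confidence < threshold:
        return None
    level = categorise(confidence)
    if auto_enabled:  # tick box set: actuate without asking
        return f'I have just turned {action} the {device} for you (confidence level: {level})'
    return f'Suggestion: turn {action} the {device}? (confidence level: {level})'

print(handle_triggered_rule('fan', 'on', 0.85, auto_enabled=True))
# I have just turned on the fan for you (confidence level: High)
```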

If the user wishes to enquire why that suggestion was made (i.e. to scrutinise the system), she can press the appropriate button in order to view a window such as the one shown below in Figure 9. Please note that for the purposes of this walkthrough we are using only a simple set of rules; in practice more complicated rules can be generated.

Note that at this point the user also has the ability to prevent the rule that generated the suggestion from being used again, by selecting the ‘Do Not Use The Selected Rule Again’ button. If the user wishes to scrutinise the actual context history associated with a given rule, she can do so by selecting the rule and clicking on the appropriate button. An example of the kind of output produced is shown in Figure 10.

The user can also choose to view all the rules associated with a given appliance at any time by pressing the ‘View Associated Rules’ button on the appropriate ‘Device Control’ part of the Main Control GUI. The rules associated with the

Figure 9. Scrutinising the rule behind a prompt to turn off the fan.


Figure 10. Scrutinising the context history associated with a given rule.

Figure 11. Scrutinising the rules associated with the fan.

fan, which are based on the context history shown in Tables I and II, are shown in Figure 11. Note that the values shown in the ‘FanMem’ column in Figure 11 effectively provide a confidence level for each rule.

5.3. setting the threshold for proactive behaviour

The user is able to control the threshold at which proactive actions occur by clicking on the appropriate radio button. For example, if the user wishes actions with a confidence level less than High to be ignored, then she would click on the High setting. This function was added on the basis of the feedback from users testing the system in their offices and the results of the questionnaire study. The user can also turn off any proactive behaviour using this part of the GUI.

5.4. viewing the context history

As was described in Section 2, fundamental to our approach is the notion of enabling the user to understand (or at least be able to scrutinise) the rules behind the system’s behaviour. For this reason, and on the basis of the results of the questionnaire, we have enabled the user to view her context history. The user currently has two choices for viewing the context history: either as raw data with numerical values, as shown in Figure 12, or as symbolic data with symbolic values, as shown in Figure 13.


Figure 12. Screenshot illustrating a 10-row sample of ‘raw’ context history.

Figure 13. Screenshot illustrating a 10-row sample of ‘symbolic’ context history.

For reasons of simplicity, we have deliberately shown an amount of context history that is very small but large enough to convey the idea and to generate the rules shown in Figure 11.

5.5. setting preferences and user-defined rules

The IOS enables users to change a variety of settings or preferences (e.g. the boundaries used to separate the different confidence categories used by the system). In addition, the system enables users to add their own rules (an example of which is shown in Figure 14) and to access and modify the ‘cut-points’ file, an example of which is shown in Figure 15.

5.6. initiating the learning process

By pressing the Learn Rules button on the Main Control GUI (shown in Figure 8) following the collection of a reasonable amount of context history, the user can request the system to start inferring rules from her context history. A few hours of context history can be sufficient, but the greater the size of the context history used, the higher the system’s ability to capture the nuances of the user’s behaviour.


Figure 14. A user interface for enabling the user to add her own rules – the generated rule is shown at the bottom of the window.

At the design stage, we needed to consider whether to adopt an on-line or an off-line learning strategy. An on-line strategy would construct a user model incrementally in real time, while an off-line strategy builds or updates the user model as a kind of batch job. Because of the high computational resources required to run an on-line learning system, the current version of the IOS utilises an off-line approach to learning.

Although we did not anticipate that the ability to scrutinise context history would be particularly popular with a great many nontechnical users, the results of the questionnaire (Section 6) indicated that many nontechnical users do desire such a facility. Furthermore, we believe that the user’s trust in the system can be fostered with this glass box (Karlgren et al., 1994) approach to the user modelling aspects of the system.

6. Current State and Initial Evaluation

This section describes the current state of our prototype deployment (Section 6.1) and the results of our questionnaire-based study (Section 6.2).


Figure 15. A user interface for enabling the user to access and modify the ‘cut-points’ file.

6.1. current state of deployment

The initial version of the IOS was first deployed in February 2002. Since that time, a number of modifications and improvements to the system have been made as challenges to our approach have been encountered. Key modifications have been the introduction of a fuzzy logic approach in January 2003, the utilisation of X10 technology in March 2004, and a complete user interface modification in October 2004. In March 2005, the system underwent significant modification in order to increase overall reliability and ease of installation for nontechnical users. The software for the system is available on the web (at http://www.comp.lancs.ac.uk/computing/staff/kc/IntelligentOffice.html) so that other research groups can experiment with the IOS.

The actual IOS system itself was deployed in the offices of four of the developers for approximately one month, during which the IOS was subject to modification. As is described below, the insights of these four real users were used to inform the design of our questionnaire-based study, which involved participants with technical and nontechnical backgrounds and which revealed a generally


positive attitude toward proactive systems such as the IOS and provided a validation of the importance of enabling the user to scrutinise the reasons (i.e. rules) behind proactive behaviour.

6.2. an initial formative questionnaire evaluation

The actual deployment of the system revealed many interesting issues, such as the need to reduce cost (in terms of perceived user effort) and maximise user benefit. It strongly informed the design of our questionnaire study, which sought both to explore the general attitude of participants towards the IOS and to investigate the following, more specific, issues:

• How much scrutability do users want, and what are appropriate ways of supporting it?

• How much control do users want, and what should be automatic?

Consequently, the questionnaire study was primarily an explorative one, aiming to lead to hypotheses to be tested in subsequent evaluation studies.

6.2.1. Questionnaire Design

Table III presents a brief description of the various dimensions and associated variables which encode the concept of the user’s attitude with respect to an intelligent control system such as the IOS. Apart from measuring the general attitude towards such systems, the questionnaire also focuses on capturing two aspects of great interest for our work, namely control and scrutability.

The questionnaire contained 52 items with a 5-point Likert scale and 13 open questions which enabled participants to make comments on the relevant issues. The user profile was captured through questions that called for standard factual data.

For the benefit of subjects who had not had experience with the IOS, a description of the system was given, along with examples of screen shots and scenarios describing how the system would behave under different circumstances.

6.2.2. Participants

The nonrandom sample comprised 30 subjects: 26 potential users of the system and four actual users. Among them were 25 males and 5 females; 63% of the participants were younger than 35 years old. The 30 subjects differed greatly in the extent of their hands-on experience with systems like the IOS; the four subjects who had actually used the IOS represent one extreme, while the other extreme was represented by subjects who had no experience even with comparable systems. Indeed, while 40% of participants declared themselves to have had previous experience with a proactive system, 33% were not sure about what exactly constitutes a ‘proactive system’, a fact indicating that the concept of a proactive system bears some ambiguity. Completing the questionnaire required between twenty minutes and one hour.


Table III. Concept operationalisation regarding user attitudes towards an intelligent control system, used as a basis for questionnaire design.

Dimension / Variable / Items

General attitude towards intelligent control systems
  Acceptance: Previous experience with such systems (e.g., automatic doors, Microsoft Office Assistant) in different environments (i.e., home, work)
  Error tolerance: Tolerance of wrong decisions made by the system during/after the training phase; willingness to allow the system to learn by accepting system errors during learning
  Market potential: Intention to buy; estimated perceived value

Control and interruption
  User control: Perceived importance of: ability to amend system decisions; system prompts before carrying out actions or without carrying out actions; ability to change between the system working automatically or with a prompt; availability of basic and expert views

Comprehensibility and scrutability
  Visualisation (of rules): Evaluation of: IF-THEN rule tables; text-based formulation of rules; decision trees; usage of colour within decision trees; context history
  Confidence level (associated with rules): Preferred display format for confidence values; need to be able to choose the format; need to be able to set the confidence threshold for proactive actions and system prompts

6.2.3. Results

In the following discussion, we report the results of statistical tests in order to convey an idea of the likelihood that the results in question arose by chance alone. The specific probabilities reported should not be taken at face value, because of the large number of tests that were performed (including some whose results are not reported here because conventional levels of statistical significance were not attained). Instead, the results of the tests are intended to give an indication of which hypotheses deserve more systematic testing in the future.

The presentation of the evaluation study results is organised according to the dimensions and variables involved in the questionnaire design.

The majority of subjects reported being frustrated when a proactive system fails: 80% for automatic doors and 50% for the Microsoft Office Assistant.


More than 70% of subjects are happy to use technology at either home or work and feel at ease with it. However, the study revealed that participants differentiated between home and office environments when considering the acceptability of intelligent control systems. More specifically, over 66% of participants would be happy to have an intelligent control system deployed at home, while this figure drops to less than 50% for deployment in a work/office environment.

In a work environment, only 13% of participants agreed to let the system learn their behaviour patterns as a basis for its proactivity, whereas at home this figure rose to over 60%. This finding is related to the fact that within a work environment, people are less inclined to accept something which may impede their performance. The reliability of such systems, e.g. the Microsoft Office Assistant, is perceived as low. This attitude is related to participants' tolerance for system errors, which is generally low, decreasing from 50% during the training phase to 16% after that. Not surprisingly, there is a significant correlation between the degree of perceived usefulness of the Microsoft Office Assistant and the willingness to tolerate system errors during the training phase (r(28) = 0.38, p < 0.05).

The perceived market value for such a system depends on system placement: 56% of study participants would pay more than 10 pounds for it when placed at home, while only 36% of participants were willing to pay more than 10 pounds for an office-based system.

6.2.3.1. Control and Interruptions. More than 90% of participants in the study sample expressed the need to control the system (scoring 4 or 5 out of 5). Furthermore, 75% of participants desired the system to prompt before carrying out actions, and 69% wanted the ability to switch between having the system work automatically or with a prompt. In terms of system transparency, 66% of participants wanted access to basic and expert views, the latter providing " . . . the ability to look at the 'rules' which have been generated".

Starting the system and then forgetting about it contradicts the need to be in control, so, unsurprisingly, only 17% of participants approved of this option.

6.2.3.2. Rule Visualisation. The questionnaire study also sought to explore participants' preferences for different formats for visualising the rules embedded in the user model. In the questionnaire, participants were shown actual examples of the different types of visualisation before being asked about their preferences. The results revealed that 69% of participants thought (giving a rating of 4 or 5 out of 5) that the textual representation of rules (e.g. "I turned on the fan because the temperature is hot and the humidity is high") was a clear and easy way of visualising the rules.

Alternatively, 59% of participants felt that the decision tree was a clear representation, and 62% of participants thought that the use of colour (as illustrated in the questionnaire) helped with the comprehensibility of the decision tree.


Figure 16. Users' strong preferences with respect to various forms of visualising the confidence level associated with rules governing the system's proactive behaviour.

Only 10% of users felt that the IF-THEN rule table approach (illustrated in the questionnaire using a table similar to that shown in Figure 11) was a clear way of representing the rules.

It was interesting to find that more than 60% of participants held a strong preference for having the option of viewing their context history "occasionally and only when I request it".

6.2.3.3. Confidence Level Visualisation. Another way to make a system's decisions transparent to its users is by making the confidence level of a suggestion visible (e.g. Figure 8). More than 50% of study participants agreed that the confidence level offers a basis for understanding the system and the accuracy of its judgements. For this questionnaire study, we considered two modalities of visualising the confidence level: decimal format and categorical format, i.e. 'high', 'medium' or 'low'. In Figure 16 the bars represent the percentage of users who strongly favour different forms of visualising the confidence level of rules embedded in the user model (giving a rating of 4 or 5 out of 5).
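The relationship between the two display modalities can be sketched as a simple thresholding function that maps a decimal confidence onto a categorical label. The boundary values below (0.4 and 0.7) are our own illustrative assumptions; the actual category boundaries used by the IOS are not specified here.

```python
def confidence_label(value, low_high=0.4, medium_high=0.7):
    """Map a decimal confidence in [0, 1] onto the categorical format
    ('Low' / 'Medium' / 'High'). Boundary values are illustrative."""
    if value < low_high:
        return "Low"
    if value < medium_high:
        return "Medium"
    return "High"

# A user preferring the categorical format would see 'High' rather than 0.85.
label = confidence_label(0.85)
```

A system allowing the user to choose the format (as participants requested) would simply select between showing `value` directly or showing `confidence_label(value)`.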

6.2.4. Exploring Attitudes towards Deployment in Home vs. Office Environments

Participants appeared to be more comfortable having an intelligent control system at home rather than at work. More participants considered that the system control


should be diminished in a work environment compared to a home environment. This attitude may be fostered by the users' need to be more in control in a work environment. For example, more than 60% would like to have an automatic light control in their rooms at home, but only 33% in their offices. This attitude is also reflected in users' views regarding the perceived market value of such a system.

6.2.5. Exploring the Attitudes of Expert Users

In our study, we found that the subjects who were more experienced in terms of computer knowledge were significantly more interested in specifying a confidence level threshold for proactive action (t(22) = 2.20, p < 0.05) and in controlling the system when necessary (t(28) = 2.83, p < 0.01). The reduced number of degrees of freedom in the first case is due to missing data.

The need to control the system correlates with the amount of knowledge users hold about computers (r(28) = 0.48, p < 0.01): the higher their level of expertise, the more in control they would like to be.
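The correlations reported here are Pearson product-moment coefficients with n − 2 degrees of freedom, so r(28) corresponds to 30 respondents. As a minimal sketch of how such a coefficient is computed, the data below are hypothetical 5-point Likert responses, not the study's actual data.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical Likert responses for 30 participants (df = n - 2 = 28):
expertise = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5] * 3
need_for_control = [2, 2, 3, 3, 4, 4, 4, 5, 5, 5] * 3
r = pearson_r(expertise, need_for_control)  # strongly positive for these data
```

The significance of r would then be assessed against a t distribution with df = 28, which is where the r(28) notation comes from.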

6.2.6. Qualitative Analysis

The questionnaire design enabled participants to provide comments when answering each of the Likert scale questions. These comments are summarised below.

When commenting on whether the proactive system should prompt before carrying out an action or simply carry out the action automatically, participants suggested that this should be task-dependent, given the possible negative consequences of automatic actions, e.g. a "fan blowing papers".

The option of having the system prompt for a short time and then not perform the action was generally considered to be of little value, particularly because "if the certainty level is high, it makes sense to let the system get on with it".

One interesting comment made by a participant was as follows: "I don't think that the user of a washing machine is interested in knowing how the system works out when to turn the water supply off, but knowing that there is a facility to cope with half loads, i.e. use less water, is useful information". This comment underlines the dichotomy between two types of user models: the structural model, which assumes that the user has internalised the structure of how the system works, and the functional model, which assumes that the user has internalised procedural knowledge about how a system can be used. Users holding a functional model of an intelligent control system are probably more inclined to accept its learning curve and to have a high tolerance of its errors. In fact, the study findings suggest that participants previously exposed to proactive systems are significantly more willing to accept the system making wrong decisions (t(28) = 1.97, p < 0.05).

We believe that these findings indicate that the designers of systems such as the IOS should anticipate a significant variance in user preferences for different system


features, particularly scrutability and control. One way to successfully accommodate such a large range of user preferences consists of designing a system able to tailor itself to each individual user. However, in practice, this may be difficult to implement, given the richness and uniqueness characterising each individual user. A possible way to approach this problem, open to future work, consists of identifying groups of users which differ greatly in terms of their preferences for system behaviour, i.e. scrutability and enabled control. Following such an approach could lead designers to develop a range of different systems, each designed on the basis of the requirements of a distinctive category of user.

7. Discussion and Lessons Learned

We would argue that our current work on the IOS represents one of the first case studies of a truly proactive system: a system that combines context-awareness, machine learning and support for scrutability. The following subsections describe the issues explored and lessons learned from our research to date with the IOS.

7.1. Benefits and Drawbacks of Deployment

From our experience in developing and deploying ubicomp/context-aware systems, we are of the strong opinion that real (albeit prototype) deployment is crucial to developing insights and highlighting areas for further investigation which can all too easily be missed in a purely paper-based study. However, the difficulty of developing a prototype system that can run continuously 24/7 for many weeks, if not months, should not be underestimated. The current system has undergone numerous modifications in order to reach a level where the user interface and overall system reliability are acceptable for deployment with non-technical users. Unforeseen problems can, of course, also occur. For example, the initial system had been developed to turn on the fan but not to turn on a heater; a move to a new building brought cooler conditions, requiring a slight modification so that the system could also control heaters. On the positive side, deployment did enable users to make explicit and meaningful choices. The layout of a user's office has a significant impact on how she may alter her preferences for the system's behaviour. For example, one of the authors has a preference for the IOS to turn her office heater on without prompting, but she does request prompting before the system turns on the fan (because she may have papers in front of the fan). Another user of the system has these preferences reversed.

Furthermore, it has only been through actual deployment that we have come to realise the importance of practical issues such as the need to deploy prototype systems that do not increase office clutter or add computer noise. It was also through deployment that we realised the strong preference (amongst the technical users involved in the study) for a system that supported touch-screen-based interaction.


7.2. Acknowledging the Need for Strong User Benefit to Ensure User Adoption

Following on from the arguments made in the previous section, we have also found it crucial, when asking users to deploy prototype systems in or around their offices, that they receive a genuine benefit from adopting the given technology. For example, we experienced this as a key issue when asking work colleagues to cooperate in the deployment of a collection of electronic office door displays (Cheverst et al., 2003). With research such as this, it is critical that the cost of using the system does not outweigh the perceived benefits in the short term. The learning approach employed by the IOS relies on the capture of events such as the user turning on a fan, turning off a heater, etc. It is simply not realistic to expect a user, even one with a vested interest in the experiment, to go to the trouble of performing two tasks in order to turn on a device, i.e. Task 1: turning on the fan, and then Task 2: pressing a button on a GUI in order to let the system record the fact that she has just turned on the fan. This was the case with an earlier version of the IOS; in order to reduce this cost, we first tried to sense automatically (using the sensor described in Section 4.1.3) when a device was turned on (thus removing Task 2). However, this approach alone has difficulties, because once the fan had been turned off manually, the system would no longer be able to turn it on (using X10). To counter this problem, we developed a GUI for the system that enabled the user to turn on the device using a separate touch-sensitive display (i.e. the one shown in Figure 7).
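The event-capture requirement described above can be sketched as a small logger that appends each sensed actuation, together with a snapshot of the current sensor readings, to the context history. The field names and CSV layout here are our own illustrative assumptions, not the IOS data format.

```python
import csv
import os
from datetime import datetime

def record_event(path, device, new_state, sensors):
    """Append one context-history row: timestamp, sensor snapshot, action.
    'sensors' is assumed to hold the latest 'temperature' and 'humidity'
    readings; these field names are illustrative."""
    row = {
        "timestamp": datetime.now().isoformat(timespec="seconds"),
        "temperature": sensors["temperature"],
        "humidity": sensors["humidity"],
        "device": device,
        "state": new_state,
    }
    write_header = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(row))
        if write_header:
            writer.writeheader()
        writer.writerow(row)
```

Sensing the actuation automatically (rather than asking the user to press a second button) means `record_event` can be called from the sensing code itself, removing "Task 2" entirely.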

7.3. Complexity vs. Comprehensibility

There is an inherent conflict between the comprehensibility of a system and the complexity of the algorithms used to generate the desired behaviour. A simple approach may be easy to comprehend but may not produce the desired behaviour. Conversely, a more complex approach achieving accurate results may not be easily understood by the average user.

From our experience with the IOS, we feel that the use of fuzzy decision trees offers a good compromise between complexity and comprehensibility. While the full details of the low-level computations may be beyond an average user, the resulting fuzzy decision tree is reasonably easy to understand and, most importantly, enables the user to scrutinise the underlying rules that the system has learnt and which it will use to perform proactive activation of devices. Furthermore, unlike a classical decision tree based on crisp cut-points, the fuzzy variant can generate (a ranked list of) secondary reasons for a decision. In this way, it not only provides more detail about why a particular decision was taken but also does so using a human conceptualisation, i.e. fuzzy categories. For example, one can imagine thinking: "yeah, it is a bit hot but still somewhat mild".
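The ranked secondary reasons arise because, unlike crisp cut-points, fuzzy categories overlap: a single reading can belong to several categories with different membership degrees. The sketch below illustrates this with triangular membership functions; the category names and cut-points are illustrative assumptions, not the membership functions used by the IOS.

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership function: 0 outside [left, right],
    rising linearly to 1 at the peak and falling linearly back to 0."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Illustrative fuzzy categories for office temperature (degrees Celsius).
TEMP_CATEGORIES = {
    "cold": lambda t: triangular(t, -5, 5, 15),
    "mild": lambda t: triangular(t, 10, 18, 26),
    "hot":  lambda t: triangular(t, 22, 30, 40),
}

def ranked_reasons(temperature):
    """Return categories ordered by membership degree: the primary reason
    first, followed by any (ranked) secondary reasons."""
    degrees = {name: f(temperature) for name, f in TEMP_CATEGORIES.items()}
    return sorted(
        ((name, d) for name, d in degrees.items() if d > 0),
        key=lambda item: item[1],
        reverse=True,
    )
```

At 24.5 °C, for instance, `ranked_reasons` reports "hot" as the primary reason with "mild" as a secondary one, mirroring the "a bit hot but still somewhat mild" intuition above.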


Figure 17. An early version of a (much disliked) modal dialog box for prompting the user.

7.4. Interruption and User Control

Uncontrollability and unpredictability are key factors responsible for the negative and stressful impact caused by interruptions (Cohen, 1980). Our approach is to reduce these negative effects by providing a user of the system with appropriate levels of control and comprehensibility (thus helping her to better predict system behaviour).

The IOS enables the user to choose whether to be informed before a proactive behaviour takes place. For example, Figure 8 shows the prompt displayed by the system when the user has requested to be informed before her fan is turned on. In effect, these prompts can be viewed as interruptions to the user. Users can control (over time) the extent to which these prompts occur. For example, in the current version of our system, as a user comes to trust the system's decisions, she may choose (using the check boxes shown in Figure 8) less prompting by the system. By doing so, she is effectively negotiating the level of interruption occurring (McFarlane and Latorella, 2002) and also sacrificing a degree of control in order to permit more automatic behaviour (through the actuation of devices) to occur.
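This negotiation can be sketched as a simple policy in which a triggered rule fires automatically only above one confidence threshold, prompts the user within an intermediate band, and otherwise stays silent. The threshold values and mode names below are illustrative assumptions, not the IOS implementation.

```python
from enum import Enum

class Mode(Enum):
    AUTOMATIC = "act without asking"
    PROMPT = "ask before acting"
    SILENT = "do nothing"

def decide(confidence, auto_threshold=0.9, prompt_threshold=0.6):
    """Map a triggered rule's confidence onto an interruption mode.
    Thresholds would be adjustable by the user: raising auto_threshold
    trades automation for control, lowering it does the reverse."""
    if confidence >= auto_threshold:
        return Mode.AUTOMATIC
    if confidence >= prompt_threshold:
        return Mode.PROMPT
    return Mode.SILENT
```

A user who learns to trust the system would, in effect, lower `auto_threshold` over time, converting prompts into automatic actions.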

The IOS also provides the user with information regarding its learnt behaviour. For example, Figure 8 shows the information (specifically the message "I believe that you want to turn the fan off (confidence level: High)") presented to the user when a rule to turn off the fan is triggered. Again, the provision of such information can be viewed as a form of interruption, but the user may choose to ignore this information or to act on it by, in this case, pressing the 'OFF' button.

An earlier version of the system used a significantly more disruptive approach to prompting the user: a modal dialog box such as that shown in Figure 17, which was designed to appear on the user's primary desktop monitor. The change to a less disruptive interface was made following feedback from users running this earlier version of the system.

In addition to providing prompts regarding the system's learnt behaviour, the IOS also attempts to promote comprehensibility (and thus reduce its unpredictability) by supporting scrutability, i.e. enabling the user to access the rules governing system behaviour. For example, Figure 8 illustrates how the user is given the opportunity to scrutinise the currently triggered behaviour.


8. Future Work

Our immediate future work is to deploy the system (or at least a variation of it) into the offices of less technical users and also to move towards deployment in a home environment for extended periods of use/study. It will be interesting to observe whether there are significant differences in user acceptance of the system in home and office environments, given the findings of the questionnaire-based study. We hope that the availability of the software part of the IOS on the web will also enable other groups to extend the system and perform their own deployments and experimentation.

Furthermore, we want to extend the number of attributes captured by the system. This will not only enable the system to reason on the basis of richer information about the current state of the environment but should also prove beneficial in dealing with values fluctuating around a cut-point (as decisions will not be based on a single attribute alone). We are also interested in incorporating time-related factors such as the time of day, the day of the week and perhaps even the time of year. These attributes will help to capture more accurately the patterns of user behaviour, such as daily or seasonal routines.

9. Summary and Concluding Remarks

This article has described our exploration into the feasibility of utilising context history with a fuzzy-decision-tree-based machine learning algorithm in order to support personalised but comprehensible proactive behaviour in an office environment.

One key motivation for carrying out this research has been to explore both the technical and human factors issues associated with a context-aware proactive system such as the IOS. Consequently, a core element of our research has been to explore issues of user control, comprehensibility and interruption. Comprehensibility in the IOS is supported by allowing the user to scrutinise the rules learnt by the system and the context history used to generate the rules (if this level of detail is requested by the user). Our questionnaire study revealed that, in general, participants were keen to scrutinise behaviour in this way. Furthermore, users involved in the study responded positively to the concepts behind the IOS but clearly expressed a desire to maintain levels of control over the system's proactive behaviour.

By designing and deploying the Intelligent Office System we have validated the feasibility of collecting context history (using both 'off the shelf' and tailored sensing devices) and utilising this context history in order to learn patterns of user behaviour which can, in turn, be used to drive proactive behaviour (e.g. turning on a fan in the user's office).

We strongly believe that through the actual deployment of a working system we have managed to obtain far greater confidence in our understanding of the technical and human factors issues than would have been achievable had only partial system implementation and Wizard of Oz based studies taken place.

Acknowledgements

We would like to thank the reviewers of this paper and the guest editors of this special issue for their highly constructive comments. We would also like to acknowledge the help provided by Martyn Burgess of Lancaster University. This research has been funded, in part, by the EPSRC CASIDE project.

References

1. Abowd, G.D. and Mynatt, E.D.: 2000, Charting past, present and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction, Special Issue on HCI in the New Millennium 7(1), 29–58.

2. Arroyo, E., Selker, T. and Stouffs, A.: 2002, Interruptions as multimodal outputs: which are the less disruptive? Proceedings of the 4th IEEE International Conference on Multimodal Interfaces (ICMI '02). Pittsburgh, Pennsylvania, pp. 479–482.

3. Barkhuus, L. and Dey, A.K.: 2003, Is context-aware computing taking control away from the user? Three levels of interactivity examined. Proceedings of UbiComp 2003: Ubiquitous Computing. Seattle, USA: Springer-Verlag, pp. 159–166.

4. Brumitt, B., Meyers, B., Krumm, J., Kern, A. and Shafer, S.: 2000, EasyLiving: technologies for intelligent environments. Proceedings of the 2nd International Symposium on Handheld and Ubiquitous Computing (HUC 2000). Bristol, UK: Springer-Verlag, pp. 12–29.

5. Byun, H.E. and Cheverst, K.: 2003, Supporting proactive "Intelligent" behaviour: the problem of uncertainty. Proceedings of the UM03 Workshop on User Modeling for Ubiquitous Computing. Johnstown, PA, pp. 17–25.

6. Byun, H.E. and Cheverst, K.: 2004, Utilising context history to provide dynamic adaptations. Journal of Applied Artificial Intelligence 18(6), 533–548.

7. Cheverst, K., Davies, N., Mitchell, K. and Efstratiou, C.: 2001, Using context as a crystal ball: rewards and pitfalls. Personal Technologies 3(5), 8–11.

8. Cheverst, K., Dix, A., Fitton, D. and Rouncefield, M.: 2003, "Out To Lunch": exploring the sharing of personal context through office door displays. Proceedings of the International Conference of the Australian Computer-Human Interaction Special Interest Group (OzCHI'03). Brisbane, Australia: IEEE Computer Society Press, pp. 74–83.

9. Coen, M.: 1998, Design principles for intelligent environments. Proceedings of the Tenth Conference on Innovative Applications of Artificial Intelligence. Madison, WI: AAAI Press, pp. 37–43.

10. Cohen, S.: 1980, After-effects of stress on human performance and social behavior: a review of research and theory. Psychological Bulletin 88, 82–108.

11. Dey, A.K. and Abowd, G.D.: 2000, The context toolkit: aiding the development of context-enabled applications. Proceedings of the Workshop on Software Engineering for Wearable and Pervasive Computing. Limerick, Ireland.

12. Guetova, M., Holldobler, S. and Storr, H.: 2002, Incremental fuzzy decision trees. Proceedings of the 25th German Conference on Artificial Intelligence (KI2002). Aachen, Germany, pp. 67–81.


13. Holmquist, L.E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M. and Gellersen, H.W.: 2001, Smart-its friends: a technique for users to easily establish connections between smart artefacts. Proceedings of UbiComp 2001: Ubiquitous Computing. Atlanta, USA, pp. 116–122.

14. Horvitz, E.: 1999, Principles of mixed-initiative user interfaces. Proceedings of the CHI 1999 Conference on Human Factors in Computing Systems. New York: ACM Press, pp. 159–166.

15. Intille, S.S.: 2002, Designing a home of the future. IEEE Pervasive Computing, April–June, 80–86.

16. Intille, S.S. and Larson, K.: 2003, Designing and evaluating supportive technology for homes. Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics. Kobe, Japan: IEEE Press.

17. Jameson, A., Baldes, S., Bauer, M. and Kroner, A.: 2004, Resolving the tension between invisibility and transparency. Proceedings of the 1st International Workshop on Invisible and Transparent Interfaces. Gallipoli, Italy, pp. 29–33.

18. Janikow, C.Z.: 1996, Exemplar learning in fuzzy decision trees. Proceedings of the Conference on Fuzzy Systems (FUZZ-IEEE '96). New Orleans, pp. 1500–1505.

19. Karlgren, J., Hook, K., Lanz, A., Palme, J. and Pargman, D.: 1994, The glass box user model for information filtering. Proceedings of the 4th International Conference on User Modeling (UM'94). Hyannis, MA.

20. Kay, J., Kummerfeld, R.J. and Lauder, P.: 2003, Managing private user models and shared personas. Proceedings of the UM03 Workshop on User Modeling for Ubiquitous Computing. Johnstown, PA, pp. 1–11.

21. Mantyjarvi, J. and Seppanen, T.: 2002, Adapting applications in mobile terminals using fuzzy context information. Fourth International Symposium on Human-Computer Interaction with Mobile Devices (MobileHCI 2002). Pisa, Italy: Springer-Verlag, pp. 95–107.

22. McFarlane, D.C. and Latorella, K.A.: 2002, The scope and importance of human interruption in human-computer interaction design. Human-Computer Interaction 17(1), 1–61.

23. Mitchell, T.M., Caruana, R., Freitag, D., McDermott, J. and Zabowski, D.: 1994, Experience with a learning personal assistant. Communications of the ACM 37(7), 80–91.

24. Mozer, M.C. and Miller, D.: 1998, Parsing the stream of time: the value of event-based segmentation in a complex real-world control problem. In: C.L. Giles and M. Gori (eds.), Adaptive Processing of Temporal Information. Berlin: Springer, pp. 370–388.

25. Pohl, W.: 1996, Learning about the user – user modeling and machine learning. Proceedings of Machine Learning Meets Human-Computer Interaction (ICML'96 Workshop), pp. 29–40.

26. Rodin, J. and Langer, E.: 1977, Long-term effects of a control-relevant intervention with the institutionalized aged. Journal of Personality and Social Psychology 35(12), 897–902.

27. Rubinstein, J.S., Meyer, D.E. and Evans, J.E.: 2001, Executive control of cognitive processes in task switching. Journal of Experimental Psychology: Human Perception and Performance 27(4), 763–796.

28. Salovaara, A. and Oulasvirta, A.: 2004, Six modes of proactive resource management: a user-centric typology for proactive behaviors. Proceedings of the Third Nordic Conference on Human-Computer Interaction. Tampere, Finland: ACM Press, pp. 57–60.


29. Shiu, S.C.K., Sun, C.H., Wang, X.Z. and Yeung, D.S.: 2000, Maintaining case-based reasoning systems using fuzzy decision trees. Advances in Case-Based Reasoning, 5th European Workshop (EWCBR 2000). Trento, Italy: Springer, pp. 285–296.

30. Speier, C., Valacich, J.S. and Vessey, I.: 1997, The effects of task interruption and information presentation on individual decision making. Proceedings of the 18th International Conference on Information Systems. New York: Association for Computing Machinery, pp. 21–36.

31. Weiser, M.: 1991, The computer for the 21st century. Scientific American 265(3), 94–104.

32. X10 Limited: 2004, What is X10? http://www.smarthome.com/about x10.html

33. Zeidler, J. and Schlosser, M.: 1996, Continuous-valued attributes in fuzzy decision trees. Proceedings of the 6th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems. Granada, Spain, pp. 395–400.

Authors’ Vitae

Dr. K. Cheverst, Department of Computing, Lancaster University, Lancaster, UK. Keith Cheverst is a Senior Lecturer with the Computing Department of Lancaster University. His research interests span Mobile and Ubiquitous Computing technologies, Human Computer Interaction and CSCW, and he has published over 80 research papers across these fields. He is also a keen advocate of User Centered Design and is currently the Principal Investigator on the EPSRC funded CASIDE project (www.caside.lancs.ac.uk), exploring the design and use of situated display technologies.

Mr. Hee Eon Byun, Department of Computing, Lancaster University, Lancaster, UK. Hee Eon Byun is a Ph.D. candidate in Computing at Lancaster University. He received his B.Sc. in Computer Science from Sogang University in 1989, and his M.Sc. degree in Environmental and Ecological Sciences from Lancaster University in 1999. His primary interests lie in the areas of user modelling and machine learning. The research for his thesis focuses on the utilisation of context history and machine learning in order to support proactive services, and the HCI implications associated with such an approach.

Dr. Corina Sas, Department of Computing, Lancaster University, Lancaster, UK. Corina Sas is a lecturer in HCI at the Computing Department, Lancaster University. She received her bachelor degrees in Computer Science and Psychology and her master's degree in Industrial Psychology in Romania. She received her PhD in Computer Science from University College Dublin in 2004. Her research interests include user modelling, adaptive systems and usability studies. She has published in journals and international conferences in these areas.

EXPLORING ISSUES OF USER MODEL TRANSPARENCY 273

Mr. Daniel Fitton, Department of Computing, Lancaster University, Lancaster, UK. Daniel Fitton is currently a research associate on the EPSRC funded CASIDE project (which aims to investigate such issues as how situated display technology can influence and facilitate coordination and community). His research mainly involves the rapid prototyping, deployment and investigation of new and novel situated display based systems for use in the 'real world'. He is presently writing up his thesis part-time, which focuses on the support of asynchronous messaging using interactive situated technology.

Dr. Christian Kray, Department of Computing, Lancaster University, Lancaster, UK. Christian Kray received his diploma degree in Computer Science from Saarland University in 1998, and his Ph.D. in Computer Science from Saarland University in 2003. He has worked as a full-time researcher at the German Research Centre for Artificial Intelligence (DFKI) in Saarbruecken, Germany, where his research focussed on situated interaction on spatial topics in the context of mobile applications. Since 2003 he has been a post-doctoral researcher at Lancaster University, where he investigates spatial reasoning and novel interaction techniques in the context of ubiquitous computing. His contribution is based on his background in applied AI and experience gained from deploying a number of prototypical applications with novel interaction metaphors.

Mr. Nicolas Villar, Department of Computing, Lancaster University, Lancaster, UK. Nicolas Villar is a research associate and part-time PhD student in the Embedded Interactive Systems research group of the Lancaster University Computing Department. His interests lie in developing technologies to enable novel forms of human computer interaction. In particular, his research explores the possibilities of tangible interfaces and other forms of interaction through real world objects.


