Audrey Kaplan, Workplace Diagnostics Ltd.
To date, building performance assessments have been limited, for the most part, to large organizations and institutions. The substantial investment of time and money needed to mount such assessments using traditional methods is a major hurdle for most midsize and small organizations or for niche groups. Relative to the number of facilities and organizations that could benefit from such assessments, too few are done to make them an effective tool for aligning occupant or stakeholder expectations with the finished product. Advances in electronic communications and easy-to-use Web browsers now make it attractive to conduct these assessments via the Internet or an intranet. “Cybersurveys” or “e-surveys” (polls or assessments administered electronically) represent a breakthrough in social science research that will make this work cheaper and more effective. Moreover, the innovative medium will likely inspire wholly new work and objectives that could not be accomplished using existing assessment methods (Bainbridge, 1999).
Regardless of the medium, all assessment work begins with good research design, clearly stated objectives, solid planning, and preparation to conduct a survey, complete the analysis, and produce reports. This chapter addresses issues related to implementing cybersurveys and assumes that sound principles of building performance assessment are in place. Information technology has unique features to gather feedback about building performance, which in turn can be used to improve facility management and acquisition.
As the set of Internet users begins to reflect the population in general, or a specific group under study, cybersurveys may become the predominant method of administering building assessments. If widely used, the assessment process can be an effective tool for continually improving the value of each facility, regardless of its size or unique occupancy.
Cybersurveys complement existing survey methods of assessment, such as paper, mail, phone, in-person interview, site visits, expert observations, photography, and instrument measurements.
Instruments to assess buildings continue to improve steadily. Prompted by the energy crisis of the 1970s, instruments were developed to measure and/or diagnose the effectiveness of building systems (e.g., walls; windows; roofs; heating, ventilation, and air conditioning; lighting). Generic methodologies were conceived to diagnose total building systems, and tools were designed to assess the performance of specific building aspects. See, for example, A Generic Methodology for Thermographic Diagnosis of Building Enclosures (Mill and Kaplan, 1982) for a methodology and tools to determine heat loss and uncontrolled air flow across building enclosures. The objective of methodologies and tools developed at that time was to better understand the cause of deficiencies—be they rooted in design or construction—and then to prescribe corrective actions and processes to prevent future occurrences.
In the 1980s, heightened concern for the quality of interior environments drove the development of new tools to better represent the total environmental setting that occupants experience. Many traditional lab instruments were modified (or put on carts) and brought into occupied buildings. The focus was to record environmental factors that influence human comfort and performance (e.g., light, sound, temperature, relative humidity, air quality, spatial configuration, aesthetics). These instruments were often cumbersome or too delicate for such robust applications, so manufacturers and researchers alike redesigned them to better match the field conditions and the nature of the data being collected (see, for example, Schiller et al., 1988, 1989). Features modified to make reliable field instruments included narrowing the instruments’ range, adjusting the sample duration and frequency, placing sensors to record levels in occupied zones (as opposed to air supply ducts, etc.), and performing immediate statistical analysis of select field data for preliminary interpretation to guide the remainder of the data collection. Since then, there have been steady, incremental improvements to the set of field instruments, benefiting from miniaturization and faster processing as the whole computer industry developed.
Academic centers continue to advance the use of instruments to assess building performance. See, for example, the Center for Environmental Design Research (also known as the Center for the Built Environment) at the University of California, Berkeley, www.cbe.berkeley.edu, and Cornell University, Ithaca, New York, ergo.human.cornell.edu. Depending on funding to support building performance programs, these and other institutes lend instruments and materials to architecture, engineering, and facility management schools. These lending programs are intended to encourage the next generation of built environment professionals to build and manage environmentally responsible and energy-efficient structures. For example, the Vital Signs program at the University of California, Berkeley (1996), assembled tools to measure a range of building performance attributes. These were packaged into kits that educators could borrow for their classes. Support funding has since expired; however, other programs carry on toward the same objectives. See, for example, the occupant survey project <www.cbe.berkeley.edu/cedr/cbe>.
Improvements continue in the measurement of the physical built environment, but there are no major breakthroughs or fundamental changes in how or what is measured. Similarly, important advances in opinion research, such as telephone interviews, did not fundamentally change the way data were collected and analyzed or questionnaires were designed (Taylor, 2000). However, the visual and responsive characteristics of cybersurveys offer new and significant opportunities in the way building assessments are conceived and completed. The author sees cybersurveys as the next technology-based advance in building performance assessments. The remainder of this chapter discusses how this technology is used to assess building performance.
The unique features of computer technology (e.g., Web, e-mail, electronic communications) improve upon and add efficiency to the polling process. The traditional methods of surveying people—telephone, mail, or in person—are expensive and tend to be used now only by large organizations. The negligible cost to distribute surveys over the Internet or intranet and the potential to reach far and wide are changing the survey industry. Now, small and medium-size businesses use cybersurveys as a means to gather valuable information from customers, potential customers, employees, or the general public. Readily available survey development products help the nonspecialist to build a Web survey questionnaire, publish it on the Web, collect the data, and then analyze and report the results (see Appendix E). These tools address how to conduct the research and the mechanics of data collection. They do not, and cannot, replace the management decision to conduct the inquiry, how to interpret the numbers, or how to get actionable results from the effort.
Cybersurveys are distributed as e-mail to a specific address (either embedded in the message or as an attachment) or openly on the Web. There are more control and layout options with Web-based surveys than with e-mail. With HTML (hypertext markup language), attractive and inviting forms can be created. Some of the survey software packages have automated routines to design the survey’s physical appearance.
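To make this concrete, the sketch below (a minimal illustration only, not taken from any particular survey package) uses a short Python script to write a one-question HTML form with a drop-down response box and a submit button. The question wording, the response options, and the form’s submission address are invented for this example, and the “noindex” meta tag in the page header anticipates the sampling controls discussed later in the chapter.

```python
# Illustrative only: writes a one-question Web survey form of the kind
# described above. Question wording, response options, and the form's
# submission address are invented for this example.

QUESTION = "How satisfied are you with the lighting at your workstation?"
OPTIONS = ["Very satisfied", "Satisfied", "Neutral", "Dissatisfied", "Very dissatisfied"]

def build_form(question, options):
    """Return a complete HTML page containing one drop-down question."""
    choices = "\n".join(
        f'        <option value="{i}">{text}</option>'
        for i, text in enumerate(options, start=1)
    )
    return f"""<!DOCTYPE html>
<html>
  <head>
    <title>Building Performance Survey</title>
    <!-- Keep the page out of search engine indexes (see the sampling notes later). -->
    <meta name="robots" content="noindex">
  </head>
  <body>
    <form method="post" action="/submit">  <!-- hypothetical collection address -->
      <p>{question}</p>
      <select name="q1">
{choices}
      </select>
      <p><input type="submit" value="Send responses"></p>
    </form>
  </body>
</html>
"""

if __name__ == "__main__":
    with open("survey.html", "w", encoding="utf-8") as handle:
        handle.write(build_form(QUESTION, OPTIONS))
```

In practice, a survey package would generate pages like this automatically; the point of the sketch is only to show how little markup a simple question requires and why download time can be kept short.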
The cost of initial data collection is high—whether for personnel to conduct interviews and code data or for setting up a computer-based survey to automatically solicit and compile data. However, the biggest issues currently facing cybersurveys are the low response rate (compared to traditional methods and relative to the potential based on number of visits to a site) and ensuring a proper sample. These issues are discussed in the next two sections, followed by lessons learned from Web-based surveys.
It is widely agreed that the more attempts made to contact respondents, the greater the chances that they will return a survey. Similarly, offering a greater number of response formats increases the likelihood of more responses. Multiple contacts, alternate ways of responding, and appropriate personalization, singly and in combination, increase the response rate. For example, Schaefer and Dillman (1998) compiled average response rates for e-mail surveys from published studies: with a single contact, the response rate was 28.5 percent; with two contacts it was 41 percent; and with three or more contacts it was 57 percent. Cybersurveys add another contact vehicle and method of response (e.g., interactive display, print and return) to the options already available. In and of itself, use of this vehicle contributes to higher response rates.
A consistent finding is that cybersurveys are returned more quickly than mail responses. Across several studies that used both e-surveys and paper (mail and fax), response times ranged from 5 to 10 days for electronic distribution versus 10 to 15 days for mail (Sheehan and McMillan, 1999).
To date, response rates from e-surveys have been lower than for postal mail. E-mail rates range from 6 to 75 percent. However, the electronic and mail surveys used for this comparison (Sheehan and McMillan, 1999) had small sample sizes (fewer than 200) and varied widely in survey topic and participants’ characteristics, so the spread is not surprising. E-surveys often achieve a 20 percent response, which is half the rate usually obtained by mail or phone surveys. Some e-surveys have had less than a 1 percent return of the possible responses, based on the number of hits to a Web site (Basi, 1999). There were some exceptions to this trend in the early 1990s, when e-surveys were distributed to workers in electronics-related businesses (e.g., telephone and high-tech sectors). At that time, there was still a high-tech allure and/or novelty to receiving e-mail, so response rates were acceptable (greater than 60 percent). This highlights how quickly the field is changing. Lessons learned from early cybersurveys may not necessarily apply in a current context. See Appendix E for a discussion of the changing context of on-line communications.
Other reasons for low participation in on-line surveys may involve a reluctance to share one’s views in a nontraditional environment. However, for those who are comfortable with this arrangement, replies to open-ended questions are richer, longer, and more revealing than those from other methods (Taylor, 2000). Cybersurveys are also intangible. Paper surveys may sit on someone’s desk and serve as a reminder to complete them. Individuals intending to complete a cybersurvey may bookmark the survey’s URL (uniform resource locator) for later use but then forget to retrieve it.
The Web is worldwide. People anywhere on the planet can answer a survey posted on the Web even if they are not part of the intended audience (unless controls are used, such as a unique URL, a password, or setting the meta tags so that the page is not picked up by search engines). To date, cybersurveys are completed by accidental volunteers (they stumbled upon the survey) and self-selected individuals who choose to participate.
Because there is no central registry of Web users, one does not know who is on-line, so one cannot reliably reach a target population (see Appendix E for details of who is on-line and where they are geographically). Moreover, there is no control on how electronic messages are cascaded or linked to other sites. A message can be directed to select individuals or groups who, for their own reasons, send it along to others. This behavior makes it difficult to ensure that the survey reaches a random or representative sample of respondents. Despite these sampling difficulties, careful use of nonprobability sampling can produce results that represent a specific subset of the population (Babbie, 1990).
The following recommendations for Web surveys are adapted from published papers, Web sites, and the author’s experience (Bradley, 1999; Kaye and Johnson, 1999; Sheehan and McMillan, 1999; Perseus, 2000). Recommendations are organized under the topics of Web survey design considerations, sampling, publicity, and data collection and responses.
Keep the survey as short as possible for quick completion and minimum scrolling.
TABLE 6-1 Internet Users, by Region, Who Prefer to Use the Web in English Rather than Their Native Tongue

Region                  % Who Prefer English
North America           74
Africa, Middle East     72
Eastern Europe          67
Asia                    62
Western Europe          51
Latin America           48

Source: USA Today (2001).
Use simple designs with only necessary graphics to save on download time.
Use drop-down boxes to save space, reduce clutter, and avoid repeating responses.
State instructions clearly.
Give a cutoff date for responses.
Personalize the survey (e.g., use recipient’s e-mail address in a pre-notification letter; identify the survey’s sponsor, which can also add credibility).
Assure respondents that their privacy will be protected—both their e-mail address and the safety of their computer system (virus free).
Conduct pre-tests to measure time and ease for completing the survey. Electronic pre-testing is easier and cheaper than traditional methods.
Try to complete the survey using different browsers to uncover browser-based design flaws.
English-language cybersurveys can easily overcome geographic barriers (Swoboda et al., 1997). See Table 6-1 for the percentage of Internet users who prefer to use the Web in English rather than their native language.
To generalize the results to a wider population, define samples as subsets of Web users based on some specific characteristic.
Solicit responses from the target population. This can be done by linking the survey from select sites, by posting announcements on discussion-type sites that the target population is likely to use, or by selecting e-mail addresses posted on key Usenet newsgroups, listservs, and chat forums. Invited participants should be given an identification number and password, both of which are required to access the Web-based survey (a minimal access-check sketch follows this list).
Clearly state the intended audience in the introduction to the survey so, hopefully, only those to whom it applies will respond.
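For the identification number and password recommended above, a minimal access check might look like the following sketch. This is illustrative only: the participant IDs, passwords, and function name are hypothetical, and a real deployment would sit behind the Web server hosting the survey rather than in a stand-alone script.

```python
# Illustrative sketch: admit only invited participants to a Web-based survey.
# IDs and passwords here are hypothetical; real systems would issue them with
# the invitation and store only the hashed passwords.

import hashlib
import hmac

# Credentials issued to invited participants (ID -> SHA-256 hash of the password).
ISSUED_CREDENTIALS = {
    "4021": hashlib.sha256(b"maple-37-kite").hexdigest(),
    "4022": hashlib.sha256(b"cedar-90-lark").hexdigest(),
}

def may_access_survey(participant_id, password):
    """Return True only if the ID was issued and the password matches."""
    expected = ISSUED_CREDENTIALS.get(participant_id)
    if expected is None:
        return False  # not an invited participant
    supplied = hashlib.sha256(password.encode("utf-8")).hexdigest()
    return hmac.compare_digest(expected, supplied)  # constant-time comparison

if __name__ == "__main__":
    print(may_access_survey("4021", "maple-37-kite"))  # True
    print(may_access_survey("9999", "anything"))       # False
```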
Systematically publicize the survey daily through various means. Create awareness of the survey at a wide variety of Internet or intranet outlets to reduce bias. “Pop-up” surveys may attract more responses than simple banner invitations that are fixed to a page. Advertise on select sites to increase the number of completions, but do not continually invite (spam) a few discussion groups while ignoring others. See Table 6-2 for methods to invite people to a survey.
Pre-notify respondents who would find the survey relevant (e.g., those to whom an important issue is current or timely).
Do not go overboard on publicity; just announce the survey.
List the survey with as many of the major search engines as possible. Services such as “Submit It” send listings to many search engines with just one entry. Then use different search strategies and terms to locate the survey and uncover glitches or errors.
TABLE 6-2 Methods to Attract Respondents to a Cybersurvey

Announcements
Browsing intercept software
Customer records
E-mail directories
Harvested addresses
Hypertext links
Interest group members
Invitations (banners, letters, etc.)
Pop-up surveys
Printed directories
Registration forms
Snowballing
Staff records
Subscribers
Web site directories

Source: Bradley (1999).
Check that the survey links remain posted and that the URL is visible on the page.
Write out the complete URL in announcements or messages so that it appears as an easy, “clickable” link in some e-mail programs.
Ask respondents how they found out about the survey to gauge which sites and discussion outlets were most effective in reaching the target audience.
If appropriate and/or feasible, offer incentives for completing the survey (ranging from the survey results to lottery tickets, money, or discounts for a third-party product). Check the laws governing incentives because these vary by jurisdiction.
Design the survey so that it is submitted with a simple click of the mouse or keystroke.
Upon receiving a completed survey, send an automatic thank-you reply telling the sender that the survey was successfully transmitted (a minimal auto-reply sketch follows this list).
To check for duplicate responses, ask for respondents’ e-mail address and/or track senders’ Internet protocol (IP) address; the data-compilation sketch following this list drops duplicate respondents.
For ease of importing e-mailed data into a statistical software program, design the return forms so that each question is listed on one line followed by its response and a corresponding numerical value (see the data-compilation sketch following this list).
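A minimal sketch of the automatic thank-you reply recommended above follows. It uses Python’s standard e-mail and SMTP libraries; the mail host, sender address, and subject line are assumptions for illustration, not part of the chapter.

```python
# Illustrative sketch: confirm receipt of a completed cybersurvey by e-mail.
# The mail server and addresses are hypothetical.

import smtplib
from email.message import EmailMessage

def send_thank_you(respondent_address):
    """E-mail a short confirmation that the survey was received."""
    msg = EmailMessage()
    msg["From"] = "survey-admin@example.org"          # hypothetical sender
    msg["To"] = respondent_address
    msg["Subject"] = "Thank you - your survey responses were received"
    msg.set_content(
        "Thank you for completing the building performance survey.\n"
        "Your responses were transmitted successfully."
    )
    with smtplib.SMTP("mail.example.org") as server:  # hypothetical mail host
        server.send_message(msg)
```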
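The next sketch illustrates, under assumed formatting conventions, the last two recommendations: it parses e-mailed replies in which each line carries a question, its response, and a numerical value (separated here by a “|” character, which is an assumption rather than a prescription), drops duplicate respondents by e-mail address, and writes a flat CSV file that a statistical package can import.

```python
# Illustrative sketch: compile e-mailed survey replies into one CSV file.
# The "question | response text | numeric value" line format, the separator,
# and the file name are assumptions made for this example.

import csv

def parse_reply(text):
    """Turn one e-mailed reply into a {question: numeric value} record."""
    record = {}
    for line in text.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:                 # question | response text | value
            question, _response, value = parts
            record[question] = value
    return record

def compile_responses(replies, out_path):
    """replies is a list of (sender e-mail, message body) pairs."""
    seen = set()
    records = []
    for sender, body in replies:
        if sender in seen:                  # duplicate respondent: keep first reply only
            continue
        seen.add(sender)
        records.append({"respondent": sender, **parse_reply(body)})
    question_keys = sorted({k for r in records for k in r if k != "respondent"})
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["respondent"] + question_keys)
        writer.writeheader()
        writer.writerows(records)
```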
People may respond to electronically viewed scales differently than to scales that are spoken or on paper. Taylor (2000) observed that fewer people picked the extremes on e-scales than when they heard the questions and response options.
The Internet or intranet and World Wide Web enable traditional building performance assessments to be accomplished more cheaply and effectively. The electronic medium may well become the primary survey vehicle owing to its convenience, its low cost of distribution and return, its ability to verify (check for errors) and receive data—including rich text replies—in electronic format, and the ease with which it gives respondents feedback. All this is done incredibly fast, with an attractive visual presentation.
The opportunities presented by the medium itself will likely lead to the invention of new products, methods, and presentations for building performance assessments. For example, new lines of research might be conceived that integrate occupants’ perception of interior environments with data from the building control system and/or weather conditions or that track the use of facilities as recorded by smart cards and/or the spatial coordinates of workspaces as recorded by computer-aided drafting systems.
Select opinion leaders and/or professionals could be drawn to an Internet-based “brain trust” that focuses on special questions related to the built environment. The Internet or intranet and Web are excellent media for establishing virtual libraries and databases that many can access and use. Establishing sites with existing data allows other researchers to produce interpretations, conclusions, or knowledge that is in some way different from that produced in the original inquiry (i.e., secondary analysis). This work would economically and efficiently add to the experience and knowledge of building performance assessments.
With cybersurveys, questions can be asked of anyone, worldwide, who is on-line. It is feasible to poll a wide population about issues that affect a specific building. The sample is not limited to those who work in, visit, or are somehow involved with a building. Tapping into these opinions becomes important when there is a highly visible construction project, a new and important public structure, or a controversy connected with a facility. Public opinion might provide valuable insight into how a building, its occupants, or a particular issue is viewed. For example, an incident of sick-building syndrome or an environmental disaster might be officially resolved in the eyes of those managing a facility, but the local community could remain suspicious and unsupportive of the organization. Using wide-reaching cybersurveys to tap into that sentiment gives insight that might influence decisions regarding the building’s publicity, image, and operational management, and how what was done is communicated to users and other interested parties.
There is great value in being able to reach a population that is not yet at a facility. For example, high school students are actively on-line and, through cybersurveys, can be asked their expectations for their university experience—how they will live, learn, study, and make friends. It would be insightful to determine that they have, for example, no desire to study in traditional libraries or in their dormitory rooms. They may expect “cybercafés,” collaborative work settings, or learning environments that ease their transition into work environments where they will earn a livelihood. Through
select demographic questions, survey researchers can characterize the future university population and suggest built environment features compatible with their expectations. Universities may use these features to attract top students and then to elicit the best work from enrolled students.
Much of the young, skilled work force in the technology sector comes from outside North America and from different cultures, each with its own expectations of work and the workplace. Here again, using cybersurveys to understand these workers’ expectations of the workplace and to educate them about North American workplaces can speed their integration into the work force. There is a strong business case for this use of cybersurveys, since the findings translate readily into business objectives such as worker productivity and reduced time to market for products.
Occupant feedback is difficult to get, or unreliable, when people are stressed, such as when they need hospitals, senior residences, and funeral parlors. However, after the crisis has passed and their emotions are stable, they could offer insight into aspects of the physical setting that helped them at the time or that would have been of benefit. Cybersurveys can reach these people at a time when they can offer measured opinions on the built environment.
The rewards of cybersurveys are so rich that their potential will surely be realized, and the methodological and sampling issues of today will be resolved. It is hard to imagine future building performance assessments without extensive use of the Internet or intranet and the Web. However, traditional methods will continue to be used, perhaps becoming a smaller part of the overall work. Printing did not fully replace handwriting, radio did not take the place of newspapers, and television did not supplant movies or radio. Cybersurveys enable many things that could not be done or afforded using traditional methods. The addition of this medium to the tools for building performance assessment greatly enhances the accessibility and value of such data and of the evaluation process itself.
Audrey Kaplan is president of Workplace Diagnostics, Ltd., an Ottawa-based consulting company that specializes in the evaluation and design of workspace. Ms. Kaplan has been actively involved in building performance as a research scientist and consultant for 20 years and has published widely in the field. Her research activities range from the evaluation of comfort and environmental quality in offices, to the assessment of total building performance, to the design of workstations with personal environmental controls. In addition to consulting, Ms. Kaplan is a regular speaker at professional conferences and continuing education seminars and is a certified instructor for professional training courses with the International Facility Management Association (IFMA). She was an adjunct professor at the University of Manitoba and served on IFMA’s international board as a director from 1998 to 2000, as chair of IFMA’s Canadian Foundation, and as a trustee of IFMA’s foundation. Ms. Kaplan received IFMA’s 1996 Distinguished Author Award for her co-authored book Total Workplace Performance: Rethinking the Office Environment. She holds a bachelor of architecture from Carleton University and a master of science in architecture from Carnegie Mellon University.
Babbie, E. (1990). Survey Research Methods. Belmont, California: Wadsworth.
Bainbridge, W. (1999). Cyberspace: Sociology’s natural domain. Contemporary Sociology 28(6):664-667.
Basi, R. (1999). WWW response rates to socio-demographic items. Journal of the Market Research Society 41(4):397-401.
Bradley, N. (1999). Sampling for Internet surveys: An examination of respondent selection for Internet research. Journal of the Market Research Society 41(4):387-395.
Center for Environmental Design Research (known as the Center for the Built Environment) (1996). Vital Signs. Berkeley: University of California (www.cbe.berkeley.edu).
Kaye, B., and Johnson, T. (1999). Research methodology: Taming the cyber frontier. Social Science Computer Review 17(3):323-337.
Mill, P., and Kaplan, A. (1982). A Generic Methodology for Thermographic Diagnosis of Building Enclosures. Public Works Canada. Report Series No. 30.
Perseus. (2000). Survey 101—A complete guide to a successful survey (www.perseus_101b.htm).
Pike, P. (2001). Technology rant: I’m tired of feeling incompetent! PikeNet February 11 (www.pikenet.com).
Schaefer, D., and Dillman, D. (1998). Development of a standard e-mail methodology. Public Opinion Quarterly 62:378-397.
Schiller, G., et al. (1988). Thermal Environments and Comfort in Office Buildings. Berkeley: Center for Environmental Design Research, CEDR-02-89.
Schiller, G., et al. (1989). Thermal comfort in office buildings. ASHRAE Journal, October:26-32.
Sheehan, K., and McMillan, S. (1999). Response variation in e-mail surveys: An exploration. Journal of Advertising Research 39(4):45-54.
Swoboda, W., Muhlberger, N., Weitkunat, R., and Schneeweiß, S. (1997). Internet surveys by direct mailing. Social Science Computer Review 15(3):242-255.
Taylor, H. (2000). Does Internet research work? International Journal of Market Research 42(1):51-63.
USA Today (2001). Searching the Web in native language. February 27, p. 7B.