IS Management Handbook, 8th Edition

OTHER AUERBACH PUBLICATIONS The ABCs of IP Addressing Gilbert Held ISBN: 0-8493-1144-6 The ABCs of TCP/IP Gilbert Held ISBN: 0-8493-1463-1 Building an Information Security Awareness Program Mark B. Desman ISBN: 0-8493-0116-5 The Complete Book of Middleware Judith Myerson ISBN: 0-8493-1272-8 Computer Telephony Integration, 2nd Edition William A. Yarberry, Jr. ISBN: 0-8493-1438-0 Global Information Warfare: How Businesses, Governments, and Others Achieve Objectives and Attain Competitive Advantages Andy Jones, Gerald L. Kovacich, and Perry G. Luzwick ISBN: 0-8493-1114-4 Information Security Architecture Jan Killmeyer Tudor ISBN: 0-8493-9988-2 Information Security Management Handbook, 4th Edition, Volume 1 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-9829-0 Information Security Management Handbook, 4th Edition, Volume 2 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-0800-3 Information Security Management Handbook, 4th Edition, Volume 3 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-1127-6 Information Security Management Handbook, 4th Edition, Volume 4 Harold F. Tipton and Micki Krause, Editors ISBN: 0-8493-1518-2

Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management Thomas R. Peltier ISBN: 0-8493-1137-3 Information Security Risk Analysis Thomas R. Peltier ISBN: 0-8493-0880-1 A Practical Guide to Security Engineering and Information Assurance Debra Herrmann ISBN: 0-8493-1163-2 The Privacy Papers: Managing Technology and Consumers, Employee, and Legislative Action Rebecca Herold ISBN: 0-8493-1248-5 Securing and Controlling Cisco Routers Peter T. Davis ISBN: 0-8493-1290-6 Securing E-Business Applications and Communications Jonathan S. Held and John R. Bowers ISBN: 0-8493-0963-8 Securing Windows NT/2000: From Policies to Firewalls Michael A. Simonyi ISBN: 0-8493-1261-2 Six Sigma Software Development Christine B. Tayntor ISBN: 0-8493-1193-4 A Technical Guide to IPSec Virtual Private Networks James S. Tiller ISBN: 0-8493-0876-3 Telecommunications Cost Management Brian DiMarsico, Thomas Phelps IV, and William A. Yarberry, Jr. ISBN: 0-8493-1101-2

AUERBACH PUBLICATIONS www.auerbach-publications.com To Order Call: 1-800-272-7737 • Fax: 1-800-374-3401 E-mail: [email protected]

IS Management Handbook, 8th Edition
Carol V. Brown and Heikki Topi, Editors

AUERBACH PUBLICATIONS A CRC Press Company Boca Raton London New York Washington, D.C.

This edition published in the Taylor & Francis e-Library, 2005. “To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.”

Library of Congress Cataloging-in-Publication Data
Information systems management handbook / editors, Carol V. Brown, Heikki Topi. — 8th ed.
p. cm.
ISBN 0-8493-1595-6
1. Information resources management — Handbooks, manuals, etc. I. Brown, Carol V. (Carol Vanderbilt), 1945– II. Topi, Heikki.
T58.64.I5338 2003
658.4'038—dc21

2003041798

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use. Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. All rights reserved. Authorization to photocopy items for internal or personal use, or the personal or internal use of specific clients, may be granted by CRC Press LLC, provided that $1.50 per page photocopied is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA. The fee code for users of the Transactional Reporting Service is ISBN 0-8493-15956/03/$0.00+$1.50. The fee is subject to change without notice. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged. The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431. Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.

Visit the CRC Press Web site at www.auerbach-publications.com © 2003 by CRC Press LLC Auerbach is an imprint of CRC Press LLC No claim to original U.S. Government works International Standard Book Number 0-8493-1595-6 Library of Congress Card Number 2003041798

ISBN 0-203-50427-5 Master e-book ISBN

ISBN 0-203-58711-1 (Adobe eReader Format)

Contributors SANDRA D. ALLEN-SENFT, Corporate IS Audit Manager, Farmers Insurance, Alta Loma, California BRIDGET ALLGOOD, Senior Lecturer, Information Systems, University College, North Hampton, England BARTON S. BOLTON, Consultant, Lifetime Learning, Upton, Massachusetts BIJOY BORDOLOI, Professor, School of Business, Southern Illinois University-Edwardsville, Edwardsville, Illinois BRENT J. BOWMAN, Associate Professor, College of Business Administration, University of Nevada-Reno, Reno, Nevada THOMAS J. BRAY, President and Principal Security Consultant, SecureImpact, Atlanta, Georgia CAROL V. BROWN, Associate Professor, Kelley School of Business, Indiana University, Bloomington, Indiana JANET BUTLER, Consultant, Rancho de Taos, New Mexico DONALD R. CHAND, Professor, Bentley College, Waltham, Massachusetts LEI-DA CHEN, Assistant Professor, College of Business Administration, Creighton University, Omaha, Nebraska TIM CLARK, Senior Systems Engineer, Cylink Corporation, Santa Clara, California HAMDAH DAVEY, Finance Manager, Tibbett Britten, United Kingdom NICHOLAS ECONOMIDES, Professor, Stern School of Business, New York University, New York, New York JOHN ERICKSON, Ph.D. Student, College of Business Administration, University of Nebraska-Lincoln, Lincoln, Nebraska MARK N. FROLICK, Associate Professor, Fogelman College of Business, University of Memphis, Memphis, Tennessee FREDERICK GALLEGOS, IS Audit Advisor and Faculty Member, College of Business Administration, California State Polytechnic University, Pomona, California TIMOTHY GARCIA-JAY, Project Director, St. Mary’s Hospital, Reno, Nevada JAMES E. GASKIN, Consultant, Mesquite, Texas HAYWOOD M. GELMAN, Consulting Systems Engineer, Cisco Systems, Lexington, Massachusetts ROBERT L. GLASS, President, Computing Trends, Bloomington, Indiana v

Information Systems Management Handbook FRITZ H. GRUPE, Professor, College of Business Administration, University of Nevada-Reno, Reno, Nevada UMA G. GUPTA, Dean, College of Technology, University of Houston, Houston, Texas GARY HACKBARTH, Assistant Professor, College of Business, Iowa State University, Ames, Iowa LINDA G. HAYES, Chief Executive Officer, WorkSoft, Inc., Dallas, Texas ROBERT L. HECKMAN, Assistant Professor, School of Information Studies, Syracuse University, Syracuse, New York LUKE HOHMANN, Consultant, Luke Hohmann Consulting, Sunnyvale, California RAY HOVING, Consultant, Ray Hoving and Associates, New Tripoli, Pennsylvania ZHENYU HUANG, Ph.D. Student, Fogelman College of Business, University of Memphis, Memphis, Tennessee CARL B. JACKSON, Vice President, Business Continuity Planning, QinetiQ Trusted Information Management Corporation, Worcester, Massachusetts RON JEFFRIES, Consultant, Xprogramming.com DIANA JOVIN, Market Development Manager, NetDynamics, Inc., Menlo Park, California RICHARD M. KESNER, Director of Enterprise Operations, Northeastern University, Boston, Massachusetts WILLIAM J. KETTINGER, Director, Center for Information Management and Technology Research, The Darla Moore School of Business, University of South Carolina, Columbia WILLIAM R. KING, Professor, Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, Pennsylvania CHRISTOPHER KLAUS, Founder and Chief Technology Officer, Internet Security Systems, Atlanta, Georgia RAVINDRA KROVI, Associate Professor, College of Business Administration, University of Akron, Akron, Ohio WILLIAM KUECHLER, Assistant Professor, College of Business Administration, University of Nevada-Reno, Reno, Nevada MIKE KWIATKOWSKI, Consultant, Dallas, Texas RICHARD B. LANZA, Manager of Process, Business and Technology Integration Team, American Institute of Certified Public Accountants, Falls Church, Virginia JOO-ENG LEE-PARTRIDGE, Associate Professor, National University of Singapore, Singapore LISA M. LINDGREN, Consultant, Gilford, New Hampshire LOWELL LINDSTROM, Vice President, Business Coach, Object Mentor, Vernon Hills, Illinois ALDORA LOUW, Senior Associate, Global Risk Management Solutions Group, PricewaterhouseCoopers, Houston, Texas JERRY LUFTMAN, Professor, Howe School of Technology Management, Stevens Institute of Technology, Hoboken, New Jersey vi

Contributors ANNE P. MASSEY, Professor, Kelley School of Business, Indiana University, Bloomington, Indiana PETER MELL, Computer Security Division, National Institute of Standards and Technology, Gaithersburg, Maryland N. DEAN MEYER, President, N. Dean Meyer Associates, Ridgefield, Connecticut JOHN P. MURRAY, Consultant, Madison, Wisconsin STEFAN M. NEIKES, Data Analyst, Tandy Corporation, Watuga, Texas FRED NIEDERMAN, Associate Professor, School of Business and Administration, Saint Louis University, St. Louis, Missouri STEVE NORMAN, Manager, Oracle Corporation; and Honorarium Instructor, University of Colorado at Colorado Springs, Colorado Springs, Colorado POLLY PERRYMAN KUVER, Consultant, Boston, Massachusetts MAHESH RAISINGHANI, Director of Research, Center for Applied Technology and Faculty Member, E-Commerce and Information Systems Department, Graduate School of Management, University of Dallas, Dallas, Texas T.M. RAJKUMAR, Associate Professor, School of Business Administration, Miami University, Oxford, Ohio V. RAMESH, Associate Professor, Kelley School of Business, Indiana University, Bloomington, Indiana C. RANGANATHAN, Assistant Professor, College of Business Administration, University of Illinois-Chicago, Chicago, Illinois VASANT RAVAL, Professor, College of Business Administration, Creighton University, Omaha, Nebraska DREW ROBB, Freelance Writer and Consultant, Los Angeles, California STUART ROBBINS, Founder and CEO, KMERA Corporation; and Executive Director, The CIO Collective, California JOHN F. ROCKART, Senior Lecturer Emeritus, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts JEANNE W. ROSS, Principal Research Scientist, Center for Information Systems Research, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts HUGH W. RYAN, Partner, Accenture, Chicago, Illinois SEAN SCANLON, E-Architect, FCG Doghouse, Huntington Beach, California WILLIAM T. SCHIANO, Professor, Bentley College, Waltham, Massachusetts S. YVONNE SCOTT, Assistant Director, Corporate Information Systems, GATX Corporation, Chicago, Illinois ARIJIT SENGUPTA, Assistant Professor, Kelley School of Business, Indiana University, Bloomington, Indiana NANCY SETTLE-MURPHY, President, Chrysalis International Inc., Boxborough, Massachusetts NANCY C. SHAW, Assistant Professor, School of Management, George Mason University, Fairfax, Virginia JAMES E. SHOWALTER, Consultant, Enterprise Computing, Automotive Industry Business Development, Sun Microsystems, Greenwood, Indiana vii

Information Systems Management Handbook KENG SIAU, Associate Professor, College of Business Administration, University of Nebraska-Lincoln, Lincoln, Nebraska JANICE C. SIPIOR, Associate Professor, College of Commerce and Finance, Villanova University, Villanova, Pennsylvania SUMIT SIRCAR, Professor, Farmer School of Business Administration, Miami University, Miami, Ohio SCOTT SWEENEY, Associate, CB Richard Ellis, Reno, Nevada PETER TARASEWICH, Assistant Professor, Northeastern University, Boston, Massachusetts HEIKKI TOPI, Associate Professor, Bentley College, Waltham, Massachusetts JOHN VAN DEN HOVEN, Senior Technology Advisor, Noranda, Inc., Toronto, Ontario, Canada ROBERT VANTOL, Senior E-Commerce Consultant, Web Front Communications, Toronto, Ontario, Canada ROBERTO VINAJA, Assistant Professor, College of Business Administration, University of Texas Pan American, Edinburg, Texas LES WAGUESPACK, Professor, Bentley College, Waltham, Massachusetts BURKE T. WARD, Professor, College of Commerce and Finance, Villanova University, Villanova, Pennsylvania MERRILL WARKENTIN, Associate Professor, College of Business and Industry, Mississippi State University, Mississippi State, Mississippi JASON WEIR, Senior Researcher, HR.com, Aurora, Ontario, Canada STEVEN M. WILLIFORD, President, Franklin Services Group, Inc., Pataskala, Ohio SUSAN E. YAGER, Assistant Professor, Southern Illinois University-Edwardsville, Edwardsville, Illinois WILLIAM A. YARBERRY, JR., Consultant and Technical Writer, Houston, Texas ROBERT A. ZAWACKI, Professor Emeritus, University of Colorado and President, Zawacki and Associates, Boulder, Colorado MICHAEL ZIMMER, Senior Coordinator, Ministry of Health Services and Ministry of Health Planning, Government of British Columbia, Victoria, British Columbia, Canada


Contents SECTION 1 ACHIEVING STRATEGIC IT ALIGNMENT

.............. 1

STRATEGIC IT CAPABILITIES 1 Assessing IT–Business Alignment. . . . . . . . . . . . . . . . . . . . . . 7 Jerry Luftman 2 IT Capabilities, Business Processes, and Impact on the Bottom Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 William R. King 3 Facilitating Transformations in IT: Lessons Learned along the Journey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Steve Norman and Robert A. Zawacki 4 Strategic Information Technology Planning and the Line Manager’s Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Robert L. Heckman 5 Running Information Services as a Business. . . . . . . . . . . . 47 Richard M. Kesner 6 Managing the IT Procurement Process . . . . . . . . . . . . . . . . 73 Robert L. Heckman 7 Performance Metrics for IT Human Resource Alignment . . . 89 Carol V. Brown 8 Is It Time for an IT Ethics Program? . . . . . . . . . . . . . . . . . . 101 Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler IT LEADERSHIP ROLES 9 The CIO Role in the Era of Dislocation . . . . . . . . . . . . . . . . 111 James E. Showalter 10 Leadership Development: The Role of the CIO . . . . . . . . . 119 Barton S. Bolton 11 Designing a Process-Based IT Organization . . . . . . . . . . . 125 Carol V. Brown and Jeanne W. Ross ix

Information Systems Management Handbook SOURCING ALTERNATIVES 12 Preparing for the Outsourcing Challenge. . . . . . . . . . . . . . 135 N. Dean Meyer 13 Managing Information Systems Outsourcing. . . . . . . . . . . 145 S. Yvonne Scott 14 Offshore Development: Building Relationships across International Boundaries . . . . . . . . . . . . . . . . . . . . . 153 Hamdah Davey and Bridget Allgood 15 Application Service Providers . . . . . . . . . . . . . . . . . . . . . . . 159 Mahesh Raisinghani and Mike Kwiatkowski SECTION 2 DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169 MANAGING A DISTRIBUTED COMPUTING ENVIRONMENT 16 The New Enabling Role of the IT Infrastructure . . . . . . . . 175 Jeanne W. Ross and John F. Rockart 17 U.S. Telecommunications Today . . . . . . . . . . . . . . . . . . . . . 191 Nicholas Economides 18 Information Everywhere. . . . . . . . . . . . . . . . . . . . . . . . . . . . 213 Peter Tarasewich and Merrill Warkentin DEVELOPING AND MAINTAINING THE NETWORKING INFRASTRUCTURE 19 Designing and Provisioning an Enterprise Network . . . . . 223 Haywood M. Gelman 20 The Promise of Mobile Internet: Personalized Services . . . 241 Heikki Topi 21 Virtual Private Networks with Quality of Service . . . . . . . 257 Tim Clark 22 Storage Area Networks Meet Enterprise Data Networks . . 269 Lisa M. Lindgren DATA WAREHOUSING 23 Data Warehousing Concepts and Strategies . . . . . . . . . . . 279 Bijoy Bordoloi, Stefan M. Neikes, Sumit Sircar, and Susan E. Yager 24 Data Marts: Plan Big, Build Small . . . . . . . . . . . . . . . . . . . . 301 John van den Hoven 25 Data Mining: Exploring the Corporate Asset . . . . . . . . . . . 307 Jason Weir


Contents 26 Data Conversion Fundamentals . . . . . . . . . . . . . . . . . . . . . 315 Michael Zimmer QUALITY ASSURANCE AND CONTROL 27 Service Level Management Links IT to the Business. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 Janet Butler 28 Information Systems Audits: What’s in It for Executives? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341 Vasant Raval and Uma G. Gupta SECURITY AND RISK MANAGEMENT 29 Cost-Effective IS Security via Dynamic Prevention and Protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 Christopher Klaus 30 Reengineering the Business Continuity Planning Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361 Carl B. Jackson 31 Wireless Security: Here We Go Again . . . . . . . . . . . . . . . . . 379 Aldora Louw and William A. Yarberry, Jr. 32 Understanding Intrusion Detection Systems. . . . . . . . . . . 389 Peter Mell SECTION 3 PROVIDING APPLICATION SOLUTIONS

. . . . . . . . . . . . 399

NEW TOOLS AND APPLICATIONS 33 Web Services: Extending Your Web . . . . . . . . . . . . . . . . . . 405 Robert VanTol 34 J2EE versus .NET: An Application Development Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415 V. Ramesh and Arijit Sengupta 35 XML: Information Interchange. . . . . . . . . . . . . . . . . . . . . . . 425 John van den Hoven 36 Software Agent Orientation: A New Paradigm. . . . . . . . . . 435 Roberto Vinaja and Sumit Sircar SYSTEMS DEVELOPMENT APPROACHES 37 The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic . . . . . . . . . . . . . . . . . . . . . . . . 457 Robert L. Glass 38 Usability: Happier Users Mean Greater Profits . . . . . . . . . 465 Luke Hohmann xi

Information Systems Management Handbook 39 UML: The Good, the Bad, and the Ugly . . . . . . . . . . . . . . . 483 John Erickson and Keng Siau 40 Use Case Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499 Donald R. Chand 41 Extreme Programming and Agile Software Development Methodologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 511 Lowell Lindstrom and Ron Jeffries 42 Component-Based IS Architecture . . . . . . . . . . . . . . . . . . . 531 Les Waguespack and William T. Schiano PROJECT MANAGEMENT 43 Does Your Project Risk Management System Do the Job? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545 Richard B. Lanza 44 Managing Development in the Era of Complex Systems. . . .555 Hugh W. Ryan 45 Reducing IT Project Complexity . . . . . . . . . . . . . . . . . . . . . 561 John P. Murray SOFTWARE QUALITY ASSURANCE 46 Software Quality Assurance Activities . . . . . . . . . . . . . . . . 573 Polly Perryman Kuver 47 Six Myths about Managing Software Development . . . . . 581 Linda G. Hayes 48 Ethical Responsibility for Software Development. . . . . . . 589 Janice C. Sipior and Burke T. Ward SECTION 4 LEVERAGING E-BUSINESS OPPORTUNITIES

. . . . . . . . . 599

E-BUSINESS STRATEGY AND APPLICATIONS 49 Building an E-Business Strategy . . . . . . . . . . . . . . . . . . . . . 603 Gary Hackbarth and William J. Kettinger 50 Surveying the E-Landscape: New Rules of Survival . . . . . 625 Ravindra Krovi 51 E-Procurement: Business and Technical Issues . . . . . . . . 637 T.M. Rajkumar 52 Evaluating the Options for Business-to-Business E-Commerce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 651 C. Ranganathan


Contents 53 The Role of Corporate Intranets . . . . . . . . . . . . . . . . . . . . . 663 Diana Jovin 54 Integrating Web-Based Data into a Data Warehouse . . . . 671 Zhenyu Huang, Lei-da Chen, and Mark N. Frolick 55 At Your Service: .NET Redefines the Way Systems Interact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691 Drew Robb SECURITY AND PRIVACY ISSUES 56 Dealing with Data Privacy Protection: An Issue for the 21st Century . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697 Fritz H. Grupe, William Kuechler, and Scott Sweeney 57 A Strategic Response to the Broad Spectrum of Internet Abuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715 Janice C. Sipior and Burke T. Ward 58 World Wide Web Application Security . . . . . . . . . . . . . . . . 729 Sean Scanlon SECTION 5 FACILITATING KNOWLEDGE WORK

. . . . . . . . . . . . . . . 749

PROVIDING SUPPORT AND CONTROLS 59 Improving Satisfaction with End-User Support . . . . . . . . . 753 Nancy C. Shaw, Fred Niederman, and Joo-Eng Lee-Partridge 60 Internet Acceptable Usage Policies . . . . . . . . . . . . . . . . . . 761 James E. Gaskin 61 Managing Risks in User Computing . . . . . . . . . . . . . . . . . . 771 Sandra D. Allen-Senft and Frederick Gallegos 62 Reviewing User-Developed Applications . . . . . . . . . . . . . . 781 Steven M. Williford 63 Security Actions during Reduction in Workforce Efforts: What to Do When Downsizing . . . . . . . . . . . . . . . . . . . . . . . 799 Thomas J. Bray SUPPORTING REMOTE WORKERS 64 Supporting Telework: Obstacles and Solutions . . . . . . . . 807 Heikki Topi 65 Virtual Teams: The Cross-Cultural Dimension . . . . . . . . . 819 Anne P. Massey and V. Ramesh 66 When Meeting Face-to-Face Is Not the Best Option . . . . . 827 Nancy Settle-Murphy


Information Systems Management Handbook KNOWLEDGE MANAGEMENT 67 Sustainable Knowledge: Success in an Information Economy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 835 Stuart Robbins 68 Knowledge Management: Coming up the Learning Curve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 843 Ray Hoving 69 Building Knowledge Management Systems . . . . . . . . . . . . 857 Brent J. Bowman 70 Preparing for Knowledge Management: Process Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 873 Richard M. Kesner INDEX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 891


Introduction

The first few years of the new millennium have been a challenging time for the information technology (IT) manager. The initial economic euphoria that greeted the successful completion of Y2K projects worldwide was quickly followed by a dramatic shakedown within U.S.-based industries most closely related to the growth of the Internet. Today, organizations are striving to find innovative ways to leverage in-place IT solutions to improve efficiency and effectiveness under a harsher economic climate. At the same time, technologies that hold the promise of globally ubiquitous access to distributed applications continue to be strong drivers of new business solutions and organizational change.

In this competitive environment, it is increasingly important for IT managers to be able to closely align IT investments with the organization's strategic goals. Both a high-quality IT infrastructure and high-quality IT services are critical for any modern organization to compete. Yet it is also essential for IT leaders to continue to assess new technologies and understand the fundamental issues of how best to integrate modern information technologies — including packaged enterprise systems, Web services, wireless access technologies, peer-to-peer computing, and voice, video, and data communication technologies — to transform the ways that organizations compete and individuals work across geographically dispersed locations.

The 70 chapters in this 8th edition of the IS Management Handbook have been selected with the objective of helping our target audience, the practicing IT manager, successfully navigate today's challenging environment. Guidelines, frameworks, checklists, and other tools are provided for a range of critical IT management topics. In addition to providing readings for our target audience of senior IT leaders, other members of the IT management team, and those consulting on IT management issues, we encourage potential adopters of this Handbook to use it as a resource for IT professional development forums and more traditional academic curricula. The five section themes we have selected for this Handbook are briefly introduced below.

Section 1: Achieving Strategic IT Alignment

Achieving strategic alignment between the IT organization and the business has been a top IT management issue for more than a decade. Achieving alignment should be viewed as a goal to continually aspire to, rather than an end state. Today, the IT investments to be aligned include not only systems investments, but also investments in IT people and IT processes. The three major topics selected for this section are strategic IT capabilities, IT leadership roles, and sourcing alternatives.

Section 2: Designing and Operating an Enterprise Infrastructure

A reliable and robust IT infrastructure is a critical asset for virtually all organizations. Decisions as to the design, implementation, and ongoing management of the IT infrastructure directly affect the success and viability of modern organizations. Yet, IT infrastructure issues have also become more complex: distributed technologies in general, and the Internet in particular, have blurred the boundaries between organizational systems and the systems of business partners. The five topics covered in this section include managing a distributed computing environment, developing and maintaining the networking infrastructure, data warehousing, quality assurance and control, and security and risk management.

Section 3: Providing Application Solutions

The development of Web-based, globally distributed application solutions requires skills and methods that are significantly different from those that were required of IT professionals in earlier eras. As client environments have become very diverse and users have come to expect ubiquitous and uninterrupted application availability, the integration between various systems components has become increasingly important. At the same time, security requirements have become increasingly stringent. The four topics covered in this section are new tools and applications, systems development approaches, project management, and software quality assurance.

Section 4: Leveraging E-Business Opportunities

The dramatic shakedown after the E-commerce boom of the late 1990s has not reduced the importance of the Internet, but may have reduced its speed of growth. For established traditional businesses, supporting E-business has become a vitally important new responsibility as organizations have pursued initiatives that include a mixture of online and offline approaches. E-business technologies have become an integral part of most organizations' development portfolios and operational environments. The two topics covered in this section are E-business strategy and applications, and security and privacy issues.

Section 5: Facilitating Knowledge Work

Facilitating knowledge work continues to be a critical IS management role. Today's typical knowledge worker is a computer-savvy user who has little tolerance for downtime and is an increasingly demanding Web user. Today's technologies also enable working remotely — as a telecommuter or a member of a virtual team. Work teams are also beginning to demand new technologies to support communications and collaboration across geographical boundaries. The three topics covered in this section are providing support and controls, supporting remote workers, and knowledge management.

How to Use This Handbook

The objective of this handbook is to be a resource for practicing IT managers responsible for managing and guiding the planning and use of information technology within organizations. The chapters provide practical management tools and "food-for-thought" based on the management insights of more than 85 authors who include former CIOs at Fortune 500 companies now in consulting, other practicing IT managers, consultants, and academics who focus on practice-oriented research and best practices. To help our readers find the sections and information nuggets most useful to them, the chapters in this Handbook have been organized under 17 topics that fit into the five section themes introduced above. For those of you interested in browsing readings in a specific IT management area, we suggest becoming familiar with our table of contents first and then beginning your readings with the relevant section introduction at the beginning of each new section. For those interested in gleaning knowledge about a narrower topical area, we recommend a perusal of our alphabetical index at the end of the Handbook.


Acknowledgments

It has been a privilege for us to work together again as editors of the IS Management Handbook. We believe that our prior experiences working in the IT field, as educators seeking to respond to the curriculum needs of IT leaders, and as researchers tracking industry trends and best practices, position us well for this editorial challenge.

We want to extend our sincere thanks to the authors who promptly responded to our requests for new material and worked together with us to develop new intellectual content that is relevant and timely. Each chapter has been reviewed multiple times in pursuit of currency, accuracy, consistency, and presentation clarity. We hope that all the authors are pleased with the final versions of their contributions to this Handbook. We also wish to offer special thanks to our publisher, Richard O'Hanley, for his insightful direction and to our production manager at CRC Press, Claire Miller, for her friendly communications and expertise. We also are grateful to our own institutions for recognizing the importance of faculty endeavors that bridge the academic and practitioner communities: Indiana University and Bentley College. Finally, we appreciate the continued support of our family members, without whose understanding this project could not have come to fruition.

As we complete our editorial work for this Handbook, we can only marvel at the speed with which economic, technological, and professional fortunes have risen, and sometimes fallen, within the past decade. We encourage our readers to continue to invest in the professional development of their staffs and themselves, especially during the down cycles, in order to be even better positioned for a future in which IT innovation will continue to be an enabler, and catalyst, for business growth and change.

CAROL V. BROWN, Indiana University, [email protected]
HEIKKI TOPI, Bentley College, [email protected]


Section 1

Achieving Strategic IT Alignment

Achieving strategic alignment between the IS organization and the business has been a top IS management issue for more than a decade. In the past, achieving strategic IT alignment was expected to result primarily from a periodic IT planning process. Today, however, the emphasis is on a continuous assessment of the alignment of IT investments in not only systems, but also IT people and IT processes. Given the high rate of change in today's hyper-competitive environments, achieving strategic IT alignment needs to also be viewed as a goal to continually aspire to, not necessarily an end state.

The 15 chapters in this first section of the Handbook address a large set of IT–business alignment issues, which are organized under three high-level topics:

• Strategic IT capabilities
• IT leadership roles
• Sourcing alternatives

STRATEGIC IT CAPABILITIES

Chapter 1, "Assessing IT–Business Alignment," presents a tool for teams of business and IT managers to reach agreement on their organization's current state of alignment and to provide a roadmap for improvement. Thirty-eight practices are categorized into six IT–business alignment criteria: communications, competency/value measurement, governance, partnership, technology scope, and skills. According to the author, most executives today rate their organizations between levels 2 and 3 on a five-level maturity curve.

Chapters 2 and 3 provide some high-level guidelines for achieving IT–business alignment via strategic IT capabilities. Chapter 2, "IT Capabilities, Business Processes, and Impact on the Bottom Line," emphasizes the need to focus on IT investments that will result in "bundles" of internally consistent elements that fulfill a business or IT objective. A leading academician, the author argues that IT capabilities primarily impact a company's bottom line through redesigned business processes. Chapter 3, "Facilitating Transformations in IT: Lessons Learned along the Journey," focuses on how an IT organization can successfully transform itself into a more flexible consultative model. Based on a previously published model, the authors describe each component of their change model for an IT management context. The chapter concludes with lessons learned based on the authors' extensive field experiences.

The next four chapters are all concerned with developing specific IT capabilities by improving IT processes and metrics. Chapter 4, "Strategic Information Technology Planning and the Line Manager's Role," presents an IT planning approach that takes into account two potentially conflicting needs: centralized IT coordination and entrepreneurial IT applications for business units. The author views the roles played by line managers in the IT planning process as critical to achieving IT–business alignment. Chapter 5, "Running Information Services as a Business," presents a framework and a set of tools for managing the IS department's service commitments to the organization. For example, the authors provide a comprehensive mapping of all IS services to discrete stakeholder constituencies, as well as templates for capturing business value, project roles, and risks. Chapter 6 presents a process model for IT procurement that has been developed by a Society for Information Management (SIM) working group. The objective of the working group was to impose discipline on a cross-functional process, based on the experiences of a dozen senior IT executives from large North American companies. The model details sub-processes and key issues for three Deployment processes (requirements determination, acquisition, contract fulfillment) and three Management processes (supplier management, asset management, quality management). Chapter 7, "Performance Metrics for IT Human Resource Alignment," focuses on designing metrics to motivate and reward IT managers and their personnel for the development of a strategic human resource capability within the IT organization. After presenting guidelines for both what to measure and how to measure, a case example is used to demonstrate some best practices in people-related metrics in an organizational context that values IT–business goal alignment. The final chapter under this topic, "Is It Time for an IT Ethics Program?," provides specific guidelines for developing an ethics program to help IT employees make better decisions. Given the recent publicity on corporate scandals due to unethical behavior within U.S. organizations, the chapter's title deserves a resounding "Yes" response: an IT ethics program appears to be a relatively inexpensive and totally justifiable investment.

IT LEADERSHIP ROLES

The three chapters on IT leadership topics share ideas based on many years of personal experience by IT leaders. Chapter 9, "The CIO in the Era of Dislocation," is based on the author's insights as a former CIO and now a consultant with regular access to thought leaders in the field. Specifically, the author argues that entrepreneurial leadership is required in today's networked world. The new era of pervasive computing and dislocating technologies and its meaning for the CIO role are described. As a former IT manager and a facilitator of leadership development programs, the author of Chapter 10, "Leadership Development: The Role of the CIO," argues that the departing CIO's legacy is not the IT infrastructure left behind, but rather the IT leadership capability left behind. This chapter was crafted with the objective of helping IS leaders understand their own leadership styles, which is the first step toward helping develop other leaders. Chapter 11, "Designing a Process-Based IT Organization," summarizes the organization design innovations of a dozen highly regarded IT leaders striving to develop a more process-based IT organization. The authors synthesize these research findings into four IT processes and six IT disciplines that characterize the early 21st-century process-based IT organization. Common challenges faced by the IT leaders who were interviewed are also described, along with some of their initial solutions to address them.

SOURCING ALTERNATIVES

Since the landmark Kodak outsourcing contract, the trade-offs between internal and external sourcing for core and non-core IT functions have been widely studied. Chapter 12, "Preparing for the Outsourcing Challenge," provides useful guidelines for preventing a bad outsourcing decision. Detailed methods to facilitate fair service and cost comparisons between internal staff and outsourcing vendors are provided. Chapter 13, "Managing Information Systems Outsourcing," discusses the key components of outsourcing agreements from a client perspective. As the author points out, good contractual agreements are the first step toward effective management and control of IS/IT outsourcing arrangements.

The final two chapters on the alternative sourcing topic discuss IT management issues associated with two new outsourcing options: (1) "offshore" outsourcing (IT work managed by organizations in other countries) and (2) application service providers (ASPs). Chapter 14, "Offshore Development: Building Relationships across International Boundaries," provides useful "food-for-thought" about how to build effective relationships with IT workers managed by an outsourcing firm located in a country that is different from the client's firm. Using a case example of a client firm based in the United Kingdom and an outsourcer in India, the authors describe some of the challenges encountered due to lack of knowledge about the client firm's business context and socio-cultural differences, as well as some suggestions for how they can be successfully addressed. ASPs provide Internet-based hosting of packaged or vendor-customized applications. Particularly if the ASP is a third-party provider, their value-added services may include software integration as well as hosting services. A composite of content from two related articles published by the same authors, Chapter 15, "Application Service Providers," presents the potential benefits of ASP arrangements (including the support of virtual organizations) and some of the infrastructure challenges. The discussion of service level agreements emphasizes the key differences between SLAs with an external ASP versus SLAs with an internal IT organization.


Chapter 1

Assessing IT–Business Alignment
Jerry Luftman

Alignment is the perennial business chart-topper on top-ten lists of IT issues. Educating line management on technology's possibilities and limitations is difficult; so is setting IT priorities for projects, developing resources and skills, and integrating systems with corporate strategy. It is even tougher to keep business and IT aligned as business strategies and technology evolve. There is no silver-bullet solution, but achieving alignment is possible. A decade of research has found that the key is building the right relationships and processes, and providing the necessary training.

What follows is a methodology developed by the author for assessing a company's alignment. Modeled after the Capability Maturity Model® developed by Carnegie Mellon's Software Engineering Institute, but focused on a more strategic set of business practices, this tool has been successfully tested at more than 50 Global 2000 companies and is currently the subject of a benchmarking study sponsored by the Society for Information Management and The Conference Board.1 The primary objective of the assessment is to identify specific recommendations for improving the alignment of IT and the business.

ALIGNMENT CATEGORIES

The tool has six IT–business alignment criteria, or maturity categories, that are included in each assessment:

1. Communications Maturity
2. Competency/Value Measurements Maturity
3. Governance Maturity
4. Partnership Maturity
5. Technology Scope Maturity
6. Skills Maturity



Each maturity category is discussed below. A list of specific practices for each of the six alignment criteria can be found in Exhibit 1.

Communications Maturity

Effective exchange of ideas and a clear understanding of what it takes to ensure successful strategies are high on the list of enablers and inhibitors to alignment. Too often there is little business awareness on the part of IT or little IT appreciation on the part of the business. Given the dynamic environment in which most organizations find themselves, ensuring ongoing knowledge sharing across organizations is paramount.

Many firms choose to draw on liaisons to facilitate this knowledge sharing. The keyword here is "facilitate." This author has often seen facilitators whose role becomes serving as the sole conduit for interaction among the different organizations. This approach tends to stifle, rather than foster, effective communications. Rigid protocols that impede discussions and the sharing of ideas should be avoided.

Competency/Value Measurements Maturity

Too many IT organizations cannot demonstrate their value to the business in terms that the business understands. Frequently, business and IT metrics of value differ. A balanced "dashboard" that demonstrates the value of IT in terms of contribution to the business is needed. Service levels that assess IT's commitments to the business often help. However, the service levels must be expressed in terms that the business understands and accepts. The service levels should be tied to criteria that clearly define the rewards and penalties for surpassing, or missing, the objectives.

Frequently, organizations devote significant resources to measuring performance factors. However, they spend much less of their resources on taking action based on these measurements. For example, requiring a return on investment (ROI) before a project begins, but not reviewing how well objectives were met after the project was deployed, provides little value to the organization. It is important to continuously assess the performance metrics criteria to understand (1) the factors that lead to missing the criteria and (2) what can be learned to improve the environment.

Governance Maturity

The considerations for IT governance include how the authority for resources, risk, conflict resolution, and responsibility for IT is shared among business partners, IT management, and service providers. Project selection and prioritization issues are included here. Ensuring that the appropriate business and IT participants formally discuss and review the priorities and allocation of IT resources is among the most important enablers (or inhibitors) of alignment. This decision-making authority needs to be clearly defined.

Exhibit 1. Alignment Criteria

For each practice below, the five descriptions correspond to Level 1: Without Process (No Alignment); Level 2: Beginning Process; Level 3: Establishing Process; Level 4: Improved Process; and Level 5: Optimal Process (Complete Alignment).

Alignment Criterion: Communications Maturity
Understanding of business by IT. (1) IT management lacks understanding. (2) Limited understanding by IT management. (3) Good understanding by IT management. (4) Understanding encouraged among IT staff. (5) Understanding required of all IT staff.
Understanding of IT by business. (1) Managers lack understanding. (2) Limited understanding by managers. (3) Good understanding by managers. (4) Understanding encouraged among staff. (5) Understanding required of staff.
Organizational learning. (1) Casual conversation and meetings. (2) Newsletters, reports, group e-mail. (3) Training, departmental meetings. (4) Formal methods sponsored by senior management. (5) Learning monitored for effectiveness.
Style and ease of access. (1) Business to IT only; formal. (2) One-way, somewhat informal. (3) Two-way, formal. (4) Two-way, somewhat informal. (5) Two-way, informal and flexible.
Leveraging intellectual assets. (1) Ad hoc. (2) Some structured sharing emerging. (3) Structured around key processes. (4) Formal sharing at all levels. (5) Formal sharing with partners.
IT–business liaison staff. (1) None or use only as needed. (2) Primary IT–business link. (3) Facilitate knowledge transfer. (4) Facilitate relationship building. (5) Building relationship with partners.

Alignment Criterion: Competency/Value Measurements Maturity
IT metrics. (1) Technical only. (2) Technical cost; metrics rarely reviewed. (3) Review, act on technical, ROI metrics. (4) Also measure effectiveness. (5) Also measure business ops, HR, partners.
Business metrics. (1) IT investments measured rarely, if ever. (2) Cost/unit; rarely reviewed. (3) Review, act on ROI, cost. (4) Also measure customer value. (5) Balanced scorecard, includes partners.
Link between IT and business metrics. (1) Value of IT investments rarely measured. (2) Business, IT metrics not linked. (3) Business, IT metrics becoming linked. (4) Formally linked; reviewed and acted upon. (5) Balanced scorecard, includes partners.
Service level agreements. (1) Use sporadically. (2) With units for technology performance. (3) With units; becoming enterprisewide. (4) Enterprisewide. (5) Includes partners.
Benchmarking. (1) Seldom or never. (2) Sometimes benchmark informally. (3) May benchmark formally, seldom act. (4) Routinely benchmark, usually act. (5) Routinely benchmark, act on, and measure results.
Formally assess IT investments. (1) Do not assess. (2) Only when there is a problem. (3) Becoming a routine occurrence. (4) Routinely assess and act on findings. (5) Routinely assess, act on, and measure results.
Continuous improvement practices. (1) None. (2) Few; effectiveness not measured. (3) Few; starting to measure effectiveness. (4) Many; frequently measure effectiveness. (5) Practices and measures well-established.

Alignment Criterion: Governance Maturity
Formal business strategy planning. (1) Not done, or done as needed. (2) At unit functional level, slight IT input. (3) Some IT input and cross-functional planning. (4) At unit and enterprise, with IT. (5) With IT and partners.
Formal IT strategy planning. (1) Not done, or done as needed. (2) At unit functional level, light business input. (3) Some business input and cross-functional planning. (4) At unit and enterprise, with business. (5) With partners.
Organizational structure. (1) Centralized or decentralized. (2) Central/decentral; some collocation. (3) Central/decentral or Federal. (4) Federal. (5) Federal.
Reporting relationships. (1) CIO reports to CFO. (2) CIO reports to CFO. (3) CIO reports to COO. (4) CIO reports to COO or CEO. (5) CIO reports to CEO.
How IT is budgeted. (1) Cost center, spending is unpredictable. (2) Cost center by unit. (3) Some projects treated as investments. (4) IT treated as investment. (5) Profit center.
Rationale for IT spending. (1) Reduce costs. (2) Productivity, efficiency. (3) Also a process enabler. (4) Process driver, strategy enabler. (5) Competitive advantage, profit.
Senior-level IT steering committee. (1) Do not have. (2) Meet informally as needed. (3) Formal committees meet regularly. (4) Proven to be effective. (5) Also includes external partners.
How projects are prioritized. (1) React to business or IT need. (2) Determined by IT function. (3) Determined by business function. (4) Mutually determined. (5) Partners' priorities are considered.

Alignment Criterion: Partnership Maturity
Business perception of IT. (1) Cost of doing business. (2) Becoming an asset. (3) Enables future business activity. (4) Drives future business activity. (5) Partner with business in creating value.
IT's role in strategic business planning. (1) Not involved. (2) Enables business processes. (3) Drives business processes. (4) Enables or drives business strategy. (5) IT, business adapt quickly to change.
Shared risks and rewards. (1) IT takes all the risks, receives no rewards. (2) IT takes most risks with little reward. (3) IT, business start sharing risks, rewards. (4) Risks, rewards always shared. (5) Managers incented to take risks.
Managing the IT–business relationship. (1) IT–business relationship is not managed. (2) Managed on an ad hoc basis. (3) Processes exist but not always followed. (4) Processes exist and are complied with. (5) Processes are continuously improved.
Relationship/trust style. (1) Conflict and mistrust. (2) Transactional relationship. (3) IT becoming a valued service provider. (4) Long-term partnership. (5) Partner, trusted vendor of IT services.
Business sponsors/champions. (1) Usually none. (2) Often have a senior IT sponsor or champion. (3) IT and business sponsor or champion at unit level. (4) Business sponsor or champion at corporate level. (5) CEO is the business sponsor or champion.

Alignment Criterion: Technology Scope Maturity
Practices: Primary systems; Standards; Architectural integration; How IT infrastructure is perceived.

Alignment Criterion: Skills Maturity
Innovative, entrepreneurial environment. (1) Discouraged. (2) Somewhat encouraged at unit level. (3) Strongly encouraged at unit level. (4) Also at corporate level. (5) Also with partners.
Key IT HR decisions made by. (1) Top business and IT management at corporate. (2) Same, with emerging functional influence. (3) Top business and unit management; IT advises. (4) Top business and IT management across firm. (5) Top management across firm and partners.
Change readiness. (1) Tend to resist change. (2) Change readiness programs emerging. (3) Programs in place at functional level. (4) Programs in place at corporate level. (5) Also proactive and anticipate change.
Career crossover opportunities. (1) Job transfers rarely occur. (2) Occasionally occur within unit. (3) Regularly occur for unit management. (4) Regularly occur at all unit levels. (5) Also at corporate level.
Cross-functional training and job rotation. (1) No opportunities. (2) Decided by units. (3) Formal programs run by all units. (4) Also across enterprise. (5) Also with partners.
Social interaction. (1) Minimal IT–business interaction. (2) Strictly a business-only relationship. (3) Trust and confidence is starting. (4) Trust and confidence achieved. (5) Attained with customers and partners.
Attract and retain top talent. (1) No retention program; poor recruiting. (2) IT hiring focused on technical skills. (3) Technology and business focus; retention program. (4) Formal program for hiring and retaining. (5) Effective program for hiring and retaining.



Partnership Maturity

The relationship that exists among the business and IT organizations is another criterion that ranks high among the enablers and inhibitors of alignment. Giving the IT function the opportunity to have an equal role in defining business strategies is obviously important. However, how each organization perceives the contribution of the other, the trust that develops among the participants, ensuring appropriate business sponsors and champions of IT endeavors, and the sharing of risks and rewards are all major contributors to mature alignment. This partnership should evolve to a point where IT both enables and drives changes to both business processes and business strategies. Naturally, this demands having a clearly defined vision shared by the CIO and CEO.

Technology Scope Maturity

This set of criteria assesses the extent to which IT is able to:

• Go beyond the back office and the front office of the organization
• Assume a role supporting a flexible infrastructure that is transparent to all business partners and customers
• Evaluate and apply emerging technologies effectively
• Enable or drive business processes and strategies as a true standard
• Provide solutions customizable to customer needs

Skills Maturity

This category encompasses all IT human resource considerations, such as how to hire and fire, motivate, train and educate, and shape culture. Going beyond the traditional considerations such as training, salary, performance feedback, and career opportunities, there are factors that include the organization's cultural and social environment. For example, is the organization ready for change in this dynamic environment? Do individuals feel personally responsible for business innovation? Can individuals and organizations learn quickly from their experience? Does the organization leverage innovative ideas and the spirit of entrepreneurship? These are some of the important conditions of mature organizations.

LEVELS OF ALIGNMENT MATURITY

Each of the six criteria described above has a set of attributes that allow particular dimensions (or practices) to be assessed using a rating scheme of five levels.

For example, for the practice "Understanding of business by IT" under the Communications Maturity criterion, the five levels are:

Level 1: IT management lacks understanding
Level 2: Limited understanding by IT management
Level 3: Good understanding by IT management
Level 4: Understanding encouraged among IT staff
Level 5: Understanding required of all IT staff

It is important to have both business and IT executives evaluate each of the practices for the six maturity criteria. Typically, the initial review will produce divergent results, and this outcome is indicative of the organization's alignment problems and opportunities being addressed. The objective is for the team of IT and business executives to converge on a maturity level.

Further, the relative importance of each of the attributes for each maturity criterion may differ among organizations. For example, in some organizations, the use of SLAs (service level agreements), which is a practice under the Competency/Value Measurements Maturity criterion, may not be considered as important to alignment as the effectiveness of IT–business liaisons, which is a practice under the Communications Maturity criterion. Assigning the SLA practice a low maturity assessment should not significantly impact the overall rating. However, it is still valuable for the assessment team to discuss why a particular attribute (in this example, SLAs) is less significant than another attribute (liaisons).

After each practice is assessed, an average score for the evaluation team is calculated for each practice, and then an average category score is determined for each of the six criteria (see Exhibit 2). The evaluation team then uses these scores for each criterion to converge on an overall assessment of the IT alignment maturity level for the firm (see below). The next higher level of maturity is then used as a roadmap to identify what the firm should do next. A trained facilitator is typically needed for these sessions.

ASSESSING YOUR ORGANIZATION

This rating system will help you assess your company's level of alignment. You will ultimately decide which of the following definitions best describes your business practices. Each description corresponds to a level of alignment, of which there are five:


Level 1: Without Process (no alignment)
Level 2: Beginning Process
Level 3: Establishing Process
Level 4: Improved Process
Level 5: Optimal Process (complete alignment)

Exhibit 2. Tally Sheet

Practice Categories and Practices. For each practice, record an averaged score of 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, or 5; then compute an Average Category Score for each category.

Communications
1. Understanding of business by IT
2. Understanding of IT by business
3. Organizational learning
4. Style and ease of access
5. Leveraging intellectual assets
6. IT–business liaison staff

Competency/Value Measurements
7. IT metrics
8. Business metrics
9. Link between IT and business metrics
10. Service level agreements
11. Benchmarking
12. Formally assess IT investments
13. Continuous improvement practices

Governance
14. Formal business strategy planning
15. Formal IT strategy planning
16. Organizational structure
17. Reporting relationships
18. How IT is budgeted
19. Rationale for IT spending
20. Senior-level IT steering committee
21. How projects are prioritized

Partnership
22. Business perception of IT
23. IT's role in strategic business planning
24. Shared risks and rewards
25. Managing the IT–business relationship
26. Relationship/trust style
27. Business sponsors/champions

Technology Scope
28. Primary systems
29. Standards
30. Architectural integration
31. How IT infrastructure is perceived

Skills
32. Innovative, entrepreneurial environment
33. Key IT HR decisions made by:
34. Change readiness
35. Career crossover opportunities
36. Cross-functional training and job rotation
37. Social interaction
38. Attract and retain top talent

Your Alignment Score:
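To make the tally-sheet arithmetic concrete, the short sketch below shows one way the averaging described in this chapter could be automated. The category groupings follow Exhibit 2, but the sample scores, the optional practice weights, and all function and variable names are illustrative assumptions rather than part of the published assessment instrument.

```python
from statistics import mean

# Practice-to-category groupings taken from Exhibit 2 (practices 1-38).
CATEGORIES = {
    "Communications": range(1, 7),
    "Competency/Value Measurements": range(7, 14),
    "Governance": range(14, 22),
    "Partnership": range(22, 28),
    "Technology Scope": range(28, 32),
    "Skills": range(32, 39),
}

LEVELS = {
    1: "Without Process", 2: "Beginning Process", 3: "Establishing Process",
    4: "Improved Process", 5: "Optimal Process",
}

def category_scores(team_scores, weights=None):
    """team_scores maps practice number -> list of scores (1-5) from each assessor.
    weights optionally maps practice number -> weight, for teams that decide a
    practice (for example, SLAs) should count for less in its category average."""
    weights = weights or {}
    practice_avg = {p: mean(scores) for p, scores in team_scores.items()}
    result = {}
    for category, practices in CATEGORIES.items():
        rated = [(practice_avg[p], weights.get(p, 1.0)) for p in practices if p in practice_avg]
        total_weight = sum(w for _, w in rated)
        result[category] = sum(score * w for score, w in rated) / total_weight
    return result

def overall_level(cat_scores):
    """Average the six category scores and map to the nearest maturity level.
    The team normally adjusts and converges on the final level through discussion."""
    avg = mean(cat_scores.values())
    level = min(5, max(1, round(avg)))
    return avg, level, LEVELS[level]

# Hypothetical ratings from a three-person assessment team:
sample = {p: [2, 2.5, 3] for p in range(1, 39)}
cats = category_scores(sample, weights={10: 0.5})   # de-emphasize SLAs (practice 10)
print(cats)
print(overall_level(cats))
```

The weight override in the example mirrors the earlier point that a de-emphasized practice, such as SLAs, need not drag down the overall rating; in practice the team still converges on the final level through facilitated discussion rather than by formula alone.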

Level 1 companies lack the processes and communication needed to attain alignment. In Level 5 companies, IT and other business functions (marketing, finance, R&D, etc.) adapt their strategies together, using fully developed processes that include external partners and customers. Organizations should seek to attain, and sustain, the fifth and highest level of alignment. Conducting an assessment involves the following four steps:

1. Form the assessment team. Create a team of IT and business executives to perform the assessment. Ten to thirty executives typically participate, depending on whether a single business unit or the entire enterprise is being assessed.
2. Gather information. Team members should assess each of the 38 alignment practices and determine which level, from 1 to 5, best matches their organization (see Exhibit 1). This can be done in three ways: (1) in a facilitated group setting, (2) by having each member complete a survey and then meeting to discuss the results, or (3) by combining the two approaches (e.g., in situations where it is not possible for all group members to meet).
3. Decide on individual scores. The team agrees on a score for each practice. The most valuable part of the assessment is not the score itself, but understanding its implications for the entire company and what needs to be done to improve it. An average of the practice scores is used to determine a category score for each of the six criteria (see Exhibit 2).
4. Decide on an overall alignment score. The team reaches consensus on what overall level to assign the organization.

Averaging the category scores accomplishes this, but having dialogue among the participants is extremely valuable. For example, some companies adjust the alignment score because they give more weight to particular practices. The overall alignment score can also be used as a benchmarking aid for comparison with other organizations. Global 1000 executives who have used this tool for the first time have rated their organizations, on average, at Level 2 (Beginning Process), although they typically score at Level 3 for a few alignment practices.

CONCLUSION

Achieving and sustaining IT–business alignment continues to be a major issue. Experience shows that no single activity will enable a firm to attain and sustain alignment. There are too many variables, and the technology and business environments are too dynamic. The strategic alignment maturity assessment tool provides a vehicle to evaluate where an organization is, and where it needs to go, to attain and sustain business–IT alignment. The careful assessment of a firm's IT–business alignment maturity is an important step in identifying the specific actions necessary to ensure that IT is being used to appropriately enable or drive the business strategy.

Note

1. See also Jerry Luftman, editor, Competing in the Information Age: Align in the Sand, Oxford University Press, 2003; and Jerry Luftman, Managing the IT Resource, Prentice Hall, 2003.



Chapter 2

IT Capabilities, Business Processes, and Impact on the Bottom Line William R. King

During the 1990s, a great deal of attention was paid to the "productivity paradox" — the phenomenon that, for the U.S. economy, while huge business investments were being made in IT, no corresponding improvements in productivity were detectable. Since the early 1990s, the paradox has been debunked on technical grounds involving such things as its reliance on government-provided productivity data, its failure to consider increased consumer benefits from IT, etc. Attempts to trace IT investments to their impact on the bottom line have also generally not proved fruitful, presumably because so many factors affect overall profitability that it is impractical to isolate the effect of one of them — IT. Despite this scarcity of empirical evidence for the impact of IT, U.S. firms have continued to invest heavily in it. Now, more than 50 percent of the total annual capital investment of U.S. firms is in IT, and IT "success stories" continue to proliferate. However, while business managers have obviously not been deterred by the practitioners of the dismal science, they have received little guidance from them either. Now, some research results are beginning to appear that hold the prospect of providing such guidance.

IT INVESTMENT VERSUS IT CAPABILITIES

One of the explanations for the productivity paradox's apparent failure is that IT investments have often been used as a surrogate measure for IT


ACHIEVING STRATEGIC IT ALIGNMENT capabilities. Clearly, a firm does not enhance the effectiveness of its IT merely by throwing money at IT. Rather, the possibility that IT will enhance business performance is through the development of efficacious IT capabilities, which include the hardware and software, the shared services that are provided by the IT function, and the IT management and organizational capacities — such as IT planning, software development skills, etc. — that bind the hardware and software to the services. IT capabilities are bundles of internally consistent elements that are focused toward the fulfillment of an IT or business objective. Without such focus on a capability, the organization may make IT expenditures in a fragmented manner. For example, until quite recently, many rather sophisticated commercial banks had computerized checking account systems, loan systems, and credit card systems that were not integrated to focus on fulfilling a variety of customer needs. These disparate systems were operationally effective, but their contribution to the bottom line was limited because they could not be used to fullest advantage to identify potential customers and to enable the development of closer relationships with customers. If a firm just invests in IT rather than in IT capabilities, it is likely to merely be acquiring IT components — primarily hardware, software, and vendor-provided services — that it may not really understand and may not be capable of fully utilizing to achieve business goals. If, on the other hand, it develops IT capabilities — sophisticated packages of hardware, software, shared services, human skills, and organizational processes — that are focused toward specific business goals, it is far more likely to be able to effectively employ the resources to impact profitability. For example, an investment in IT planning may enable better decisions to be made concerning hardware and software and how they may be best used to fulfill organizational needs. The overall package of hardware, software, planning capacity, and services that can result may rather directly impact the bottom line, whereas expenditures for new software or services may not do so because of the organization’s lack of an overall framework for deciding what is needed, what priorities are associated with each “need,” and when the organization may be ready to effectively utilize these expenditures. Part of the explanation for the failure of the productivity paradox has to do with the wide variability in the abilities of business to create IT capabilities rather than to merely spend money on IT. Some firms have created such capabilities and have therefore made wise IT expenditures. Others have continued the early-computer-era practice of buying the latest tech22

IT Capabilities, Business Processes, and Impact on the Bottom Line nology without a comprehensive plan as to how that technology might be most effectively employed in achieving business goals, and as a result they have been less successful. IT AND BUSINESS PROCESSES Research results, including a study that I conducted with Dr. Weidong Xia of the University of Minnesota, have begun to emerge that demonstrate that the primary mechanisms through which IT capabilities impact overall business performance are through the business processes. This result is consistent with the emphasis given in the past decade to the “balanced scorecard” as a measurement tool for assessing performance. The balanced scorecard deemphasizes overall financial measures and provides indices of progress toward the achievement of business process goals such as improved quality, increased customer satisfaction, and reduced cycle time. Now, these results in IT research demonstrate that it is in the improvements that can be made in such measures of business process performance through IT that the business impact of IT can be most directly felt. This is a basic premise of business process reengineering (BPR), which is intuitively appealing and widely applied (even if the BPR terminology is now somewhat dated) but which is not broadly studied and verified. In effect, proponents of BPR have argued that redesigned business processes are needed so that the inertia of the old way of doing things can be wrung out of the processes. These results from IT research can be interpreted to say that the old technologies may be similarly “wrung out” of existing old processes, not merely by replacing old technologies with new ones, but by doing zerobased process redesign on the basis of a new look at process goals and alternative ways of performing the process as well as alternative new technologies and alternative organizational forms (such as strategic alliances and outsourcing). Companies that have successfully developed IT capabilities are, ironically, in the best position to use non-IT solutions, such as alliances, in business process improvements because they are able to recognize the limits of new technologies and to focus on the best way of achieving the business goals of the business processes. The emphasis on business processes as targets for IT investments should incorporate the notion of “real options” — that is, the idea that in making an IT investment, one is not only purchasing the immediate benefit, but is either acquiring or foreclosing future options. This idea is crucial to 23

ACHIEVING STRATEGIC IT ALIGNMENT that of IT capabilities as well as in the determination of the business process benefits that may be derived from an IT investment. The simplest illustration of real options is in terms of the scalability of IT resources. If resources are scalable, some future options are preserved; if not, they may be foreclosed. Guidance to Practitioners The focus on IT capabilities and the influence that they exert through business processes leads to a number of guidelines for managers: • IT should avoid the image and reality of always recommending “throwing money” at the latest technologies. • Rather, the focus of IT investments should be on developing explicit IT capabilities that are “bundles” of hardware, software, shared services, management practices, technical and managerial skills, etc. • The soft side of IT capabilities is as important as the hard side, so that management should consider investments in IT planning and in development methodologies as carefully as they consider new hardware and software investments. • The real option value of IT investments — for example, the future options that are either made possible or foreclosed — need to be considered in developing an IT capability and in redesigning E-business processes. • IT should focus on impacting the business through impacting key business processes because, if the emphasis is on balanced scorecard measures, eventually these impacts will flow to the bottom line. • Correspondingly, while an emphasis on IT impacting profitability can lead to IT never really being held accountable, the focus on business process measures is more readily assessed and more easily attributable to the success or failure of IT. • In considering the redesign of business processes, IT should give attention to new organizational arrangements, such as alliances and outsourcing, rather than concentrating solely on IT solutions.
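The scalability illustration of real options can be made concrete with a deliberately simple expected-value calculation. The demand scenarios, probabilities, and dollar amounts below are hypothetical, chosen only to show how preserving an option (the ability to scale up later) changes the value of an IT investment relative to an alternative that forecloses it.

```python
# Two infrastructure choices, evaluated under uncertain future demand.
# All figures are illustrative and are not drawn from the chapter.

scenarios = [
    {"name": "demand stays flat", "probability": 0.6, "upside_payoff": 0.0},
    {"name": "demand doubles",    "probability": 0.4, "upside_payoff": 500_000.0},
]

def expected_value(base_payoff, initial_cost, can_exercise_upside, upgrade_cost=150_000.0):
    """Value = base payoff - initial cost + expected value of the growth option.
    A scalable platform can 'exercise' the option later by paying upgrade_cost;
    a non-scalable platform forecloses that option entirely."""
    value = base_payoff - initial_cost
    for s in scenarios:
        if can_exercise_upside:
            # Exercise only when the upside exceeds the cost of scaling up.
            value += s["probability"] * max(s["upside_payoff"] - upgrade_cost, 0.0)
    return value

# Non-scalable platform: cheaper today, but the growth option is foreclosed.
fixed = expected_value(base_payoff=400_000.0, initial_cost=250_000.0, can_exercise_upside=False)

# Scalable platform: costs more today, but preserves the option to expand.
scalable = expected_value(base_payoff=400_000.0, initial_cost=300_000.0, can_exercise_upside=True)

print(f"non-scalable: {fixed:,.0f}  scalable: {scalable:,.0f}")
# Although the scalable choice costs 50,000 more up front, the preserved
# option is worth 0.4 * (500,000 - 150,000) = 140,000 in expectation here.
```

The point of the sketch is not the particular numbers but the structure of the comparison: the option value only appears in the decision if someone deliberately models the future choices that each investment preserves or forecloses.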


Chapter 3

Facilitating Transformations in IT: Lessons Learned along the Journey Steve Norman Robert A. Zawacki

The rate of change today is higher than it ever has been in the past. There is now more pressure on information technology (IT) professionals than ever before. IT firms or departments must do more with less, and must do so quicker than ever before given today’s competitive environments. Given these factors, it is even more critical today that IT companies employ successful change strategies and processes. If a company cannot quickly and successfully adapt to change, it is destined to fail. The purpose in writing this chapter is to propose a model that will allow for successful change in IT firms. This model is reinforced by more than 30 years of research and personal experiences, and has proven successful in many of today’s top IT firms. CONTINUING TURBULENCE The turbulence in which companies operate today has reached peak levels. Since 1995, mergers and acquisitions have increased in absolute numbers and size. Those mergers permit economies of scale, which translates to more and larger workforce reductions (the projections were that the numbers for 2001 would be the highest ever). Further, organizations are



ACHIEVING STRATEGIC IT ALIGNMENT locked in a global struggle over time-based competition, cost-effectiveness, even better customer service, and the need to be innovative while being flexible. As a consequence, the trust between management and individual contributors is at an all-time low.2 The outcome of this perceived random change is a high need by IT leaders to transform their organizations from the old bureaucratic control model to a more flexible consultative model. In the authors’ opinion, which has been proven correct by many companies, the organizational model that permits people to best respond to the above drivers of change is the “learning organization.” Not only is this model flexible, it is also scalable, enabling it to be implemented in many different organizations, regardless of size. This model is also called the STAR organization.3 It has many synonyms, such as the high velocity environment, the ad hoc organization, the shamrock organization, and even the expression “violent implementation,” to describe a development department’s strategy in response to timebased competition. Regardless of what it is called, an outcome of this continuous/discontinuous change is an expressed need by IT leaders4 for strategic alignment with the business, and with a renewed focus on strategy and tactics. Because of time-based competition and cost-effectiveness, IT leaders must do more with less and even quicker than before. CREATING THE BURNING PLATFORM “The need for a transformation stems from environmental turbulence that can render current organizational practices valueless. To respond, leaders must transform their organizations.”5 The transformation to the learning (STAR) organization is a new paradigm resulting in a new synergy that assists people in responding more quickly to the drivers of change than their competitors. The four main drivers of change are:6 1. Even better customer service: both internal and external. 2. Cost-effectiveness: a firm cannot, in the long run, cut costs and have sustained customer service and growth (we went through a period of cost reduction and confirmed this!). 3. Time-based competition: the firm that gets to the market first with a new product/service has a temporary monopoly, which is rewarded by the marketplace and by the stock market. 4. Innovation and flexibility: the organization of the future simply must learn faster and adapt faster than its competitors. 26

Facilitating Transformations in IT: Lessons Learned along the Journey THE MODEL FOR ALIGNMENT AND FOCUS Through coaching various IT leaders through their transformations toward learning organizations, and through personal observation, it was realized that there was a need to have a process that helped an organization’s people understand why they had to launch the transformation. A further need was to understand how to implement the transformation. This circular process is the key to rebuilding trust. When people feel trusted and valued, they add value to the customer and to the bottom line of the income statement. The search for a model to help understand this dual model of why and how led us to an article by Nutt and Backoff of Ohio State University, which Zawacki included in his Organizational Development book (5th edition, see Note 2). We then modified their model to make it a better fit with our research and the IT environment. Several clients have said that the model helped them understand their turbulent environment and also gave them a roadmap through the transformation. The modified model appears in Exhibit 1. Moving up the ladder in the model from the bottom addresses the why questions for the various levels in an organization. For example, for programmers or software engineers, the organizational transformation should reduce their job stress, improve their quality of work life, and give them empowerment (autonomy) with better processes and a clearer vision. Conversely, to implement this transformation (how), the IT leadership team must begin with a clear vision and values, and then must examine all of its processes, people empowerment, etc. VISION, VALUES, STRATEGIC OBJECTIVES, AND NEW BEHAVIORS Establishing the organizational vision, values, strategic objectives, and behaviors should be a collaborative process that increases the opportunity for organizational success.7 By making this process a collaborative one, you increase buy-in and, thus, ownership in the resulting vision, values, strategic objectives, and behaviors. When there is true organizational ownership in these, there is more commitment to the success of the organization. This resulting commitment greatly enhances the organization’s chances for success. Unfortunately, it is our experience that many IT executives have a desire to skip this step because of its “fuzzy” nature and because of the high degree of difficulty in defining the future. After going through this process, we had one very effective executive vice-president say, “If I hear that vision word again, you are out of here!” As stated, this new paradigm must begin with a clear vision, which must align with the company’s strategic objectives, and which then must result in new values and behaviors. The importance of a clear vision and corre27

Exhibit 1. Functional Specification Analysis Precursors
(Diagram labels: Vision; Values; Strategic Objectives; Review Processes; New Behaviors; Empowerment of People (Trust); Trustworthy/Integrity; Work Climate and Culture; Meaningful Work; Increased Commitment and Productivity)

sponding values cannot be overstated. A clear vision and values are the stakes in the ground that associates hang onto, and that also pulls them toward the future when “turf” conflicts begin to surface during a planned transformation. This vision must also be exciting, engaging, and inspiring. It must have all of these qualities to truly get the energy of the organization around it. If it fails to exhibit these qualities, the organization simply will not have the sustained effort required to succeed. After establishing such a vision, the IT leadership team must then clearly articulate the new supporting values and behaviors of the STAR organization to all levels of the organization. The entire organization must clearly understand what the new vision, values, and behaviors are so that everyone knows what is expected of them. In addition, the leadership team 28

Facilitating Transformations in IT: Lessons Learned along the Journey must be sure to keep communicating the vision, values, and behaviors to reinforce the message. It is also critical that the leadership team look for early opportunities to positively reinforce the desired behaviors when exhibited. In addition, incentive and reward programs that support the new vision, values, and behaviors must quickly be put in place to capitalize on early victories and to capture and gain momentum. Of course, new values must be differentiated from old (mature) values and the focus must be on the new values. Some examples of mature and new values and the consequences (outcomes) of each are described in Exhibit 2. REVIEW PROCESSES After establishing the vision, values, and behaviors, the IT leadership team must review all processes to be sure that they support and enhance the learning organization. Every process must map directly back into the vision, values, and behaviors that were established. Otherwise, the process is merely overhead and should be reexamined (and possibly eliminated). For example, does the joining-up process include the interview, offer, in-processing, sponsorship, coaching, training, and the tactical goals and resources to do the job in the new organization? If so, great! If not, this should be immediately reexamined and revamped. Another key success factor is the priority-setting process for projects. It is our experience that if the prioritization process does not involve the business partners to the degree necessary, the CIO is set up for failure. Usually, the business partners negotiate their projects with the CIO and then the CIO is put in a position of setting priorities between the various business units. Unless the business partners (as a team) set the priorities for the entire organization, the CIO is doomed to fail, given limited resources and reduced cycle time of projects, because of the time-based competition in the market. Finally, metrics must be a key part of the process examination. Of course, the metrics used must be valid and must measure what they are supposed to measure. The metrics used must also be carefully examined to be sure they parallel the organizational direction. We use a metrics package titled “360 Degree Benchmarking” that includes measures for five critical areas: human resources/culture, software development, network platform services, data centers, and enterprise IT investments.8 Unfortunately, it has been our experience that many IT leaders resist metrics for fear of the unknown (or for fear of being held accountable to them!). However, metrics and baseline measures are critical because they permit the leadership team to demonstrate the value of the transformation process to the CEO. 29

Exhibit 2. Consequences of Mature and New Values

Mature Values → Manifestations or Outcomes:
• Little personal investment in IT vision, values, and objectives → They are the leader's values, not mine
• People need a leader to direct them → Hierarchy of authority and control
• Keep the boss happy → Real issues do not surface at meetings
• If something goes wrong, blame someone else → Appeal procedures become over-formalized
• Do not make waves → Innovation is not widespread but in the hands of a few technologists
• Tomorrow will be just like today → People swallow their frustrations: "I can't do anything — it's leadership's responsibility to save the ship."
• People do not like change/they like security → Job descriptions, division of labor, and little empires

New Values → Manifestations or Outcomes:
• Vision, values, and objectives are shared and owned by all IT individual contributors and business units → These are my values
• People are capable of managing themselves within the vision, values, and objectives → Hierarchy is replaced with self-directed teams
• Keep the customer happy → Customer-driven performance
• The buck stops here → People address real problems and find synergistic solutions
• Make waves → Waves result in innovation
• Nobody knows what tomorrow will bring → Constant learning to prepare for the unknown future
• Although random change upsets our behavior patterns, we learn and adjust → Change is an opportunity to grow

Note: For a more detailed discussion of the change paradigm, see Steven W. Lyle and Robert A. Zawacki, “Centers of Excellence: Empowering People to Manage Change,” Information Systems Management, Winter 1997, pp. 26–29.

EMPOWERMENT OF PEOPLE Empowerment was a new “buzzword” of the 1990s. However, trying to define empowerment is similar to the difficulty of defining pornography. This dilemma was best summed up by former Justice Potter Stewart’s now-famous statement: “I shall not today attempt to define [obscenity]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.”


Facilitating Transformations in IT: Lessons Learned along the Journey The American Heritage Dictionary defines empower as “to enable or permit.” This is also referred to in the business world as autonomy. One can say that empowerment is basically how much autonomy the employee has in accomplishing organizational goals. So, what does this mean? We can perhaps better define or measure empowerment by observing leadership behavior in IT organizations, and then going back to the values and behaviors in Exhibit 2 to determine how closely they match. Does the organization truly empower its people? Do its actions support the vision, and does the organization then enable its people to get to the vision in their own way? After 30 years of consulting with IT organizations, we do know that individual contributors want to be treated as adults and want to control their own destiny. This does not mean that they desire laissez-faire management. Rather, people want goals and deadlines, and, given that they have the needed ability, training, and resources, they want the autonomy to accomplish those goals. However, they do expect strong feedback on goal accomplishment. Another aspect of empowering people is bureaucracy-bashing. The basic objective of bureaucracy-bashing is to remove low-value work and create “headroom” for overstressed people while building trust. This process of reverse-engineering is similar to GE’s workout sessions or Ford/Jaguar’s egg groups. Many of our STAR organizations also use quick hits and early wins as tactical moves to deliver quickly and reinforce the benefits of organizational transformations.9 WORK CLIMATE AND CULTURE To improve the quality of work life, do not re-invent the wheel. Benchmark against the best and leverage what has already been done. There are many good processes/systems already designed and implemented. For example, a networking group client recently decided that a strong technical career track was one of the keys to the future as organizations delayered. When they called, we referred them to another client that had an excellent technical career track. A team of development people visited the organization, liked what they saw, borrowed the hosting IT organization’s procedure, and implemented the process when they were “back at the ranch.” Very little cost and no consultants. The STAR organization is very flat, with few layers of management. Additionally, it is based on strong project management, and led by people who have a passion for the end product. After 30 years of interventions in IT organizations, however, we find very few organizations with strong project management — and this is an alarming trend. Most companies talk a good game; however, when you look more closely, strong project management is not there. We believe very strongly that this is a key competency of the 31

ACHIEVING STRATEGIC IT ALIGNMENT future in IT. Therefore, it must be examined and reexamined constantly and consistently. MEANINGFUL WORK Strong project managers motivate their IT people through meaningful work. Once the salary and benefits are competitive, our research indicates that IT people want meaningful work, which consists of using a variety of their skills, identifying with the larger goals of the organization, having highly visible work, having autonomy, and receiving good feedback. A related metric that we use is the Job Diagnostic Survey — Information Technology,10 which measures the meaningfulness of work, compares the richness of the jobs to global norms, and also measures the match-up of the person and the job. Meaningful work, equitable pay, and benefits explain more of the productivity of an IT department than any other variables. INCREASED COMMITMENT AND PRODUCTIVITY When organizations look closely at the meaningfulness of the work itself, and then match that with the needs of the person doing the work, they greatly enhance their chances for success. Individuals with a high growth need strength (GNS) should be “matched” in jobs that offer high motivating potential scores (MPS). That is, people with a high need for growth should be put in jobs that offer the growth they need. Conversely, if individuals have a low GNS, they can be placed in the jobs that have lower MPS. If this match is not looked at closely, the people on either end of the scale will quickly become dissatisfied, and their commitment and productivity will then decrease significantly. Our research indicates that 50 to 60 percent of an IT professional’s productivity stems from the match between the person and the job (GNS and MPS). Obviously, organizations want people who are committed and productive in order to increase their overall chances for value added to the bottom line. TRUSTWORTHINESS Organizations also want people who are trustworthy. People with high integrity are more apt to work smarter to make the organization successful. People who are not trustworthy are in a position to cause a great deal of damage to an organization, thus limiting the organization’s chances for success. Many individual contributors want to be trusted (empowered); however, they must also realize that they must be trustworthy. An individual’s trustworthiness is the sum total of his or her behavior, commitment, motivation, and follow-through. Although this variable is more difficult to measure than the others, it is also a key factor in an organization’s success and must be understood and examined. 32
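The person-job matching logic behind the GNS and MPS discussion above can be sketched in a few lines. The formula assumed here is the standard motivating potential score from the Hackman-Oldham job characteristics model that underlies the Job Diagnostic Survey; the rating values and cut-off thresholds are illustrative guesses, not figures from the authors' instrument.

```python
def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Standard Hackman-Oldham MPS; each core job dimension is rated 1-7."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3.0
    return meaningfulness * autonomy * feedback

def match_quality(gns, mps, gns_high=5.0, mps_high=120.0):
    """Crude screen for person-job fit. The cut-offs are illustrative:
    treat GNS of 5 or more (on a 1-7 scale) as high growth need and MPS above
    120 as an enriched job, then flag the two mismatch cases."""
    if gns >= gns_high and mps < mps_high:
        return "high growth need but a low-MPS job: dissatisfaction likely"
    if gns < gns_high and mps >= mps_high:
        return "low growth need in a highly enriched job: also a poor match"
    return "reasonable person-job match"

# Hypothetical ratings for one developer and one job:
mps = motivating_potential_score(skill_variety=6, task_identity=5,
                                 task_significance=6, autonomy=6, feedback=5)
print(round(mps, 1), match_quality(gns=6.5, mps=mps))
```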

Facilitating Transformations in IT: Lessons Learned along the Journey SUMMARY More than 70 percent of U.S. families now have two or more wage earners. As IT organizations merge, are bought out, or eventually downsize, the remaining people must still do all of the work of the people who left the organization. Not only that, but they must now do the same amount of work in less time. In many IT organizations, leadership’s response to the drivers of change is to have people work longer and harder. People can do this in the short run; however, in many IT organizations, the short run has become the long run. This is a trend that must be altered quickly or there will be severe negative consequences to the company’s success. With a labor market that is becoming increasingly tight for IT workers, and with ever-increasing job stress, the key to a sustained competitive advantage through people is the learning organization. This transformation to the learning organization must begin with the “whys” and “hows.” The key for IT leadership is to tell and show people that there is light at the end of the tunnel. Thus, the transformation journey begins with a vision and ends with reduced job stress. LESSONS LEARNED ALONG THE WAY After 30 years of research, teaching, consulting, and coaching IT organizations through change and transformations, we submit the following lessons learned. While not all IT organizations exhibit all of these characteristics, the trend is so strong and clear that we feel compelled to make these statements. Some may shock the reader. 1. IT cultures eat change for lunch. 2. Empowerment of IT associates is a myth. 3. Mergers and turnover in IT leadership are killing many good change programs. New leaders have a need for their own program or fad of the month. 4. Associates are hearing two conflicting messages: get the product out the door and innovate. Of the two, getting product out the door wins every time. 5. IT leaders talk a good game on measurement but, in reality, many do not want to be measured. 6. Where IT is implementing effective change, there is always a good change champion at the executive level. 7. During the 1990s, the market for IT people shifted from a buyer’s market to a seller’s market (due to hot skills, the year 2000, the Internet, and a drop-off in college majors). Now, there is a shift back to a buyer’s market because of the huge failure of dot.coms and the general downturn in the U.S. economy. However, be alert for the economy to return to its previous high levels of gross domestic 33

ACHIEVING STRATEGIC IT ALIGNMENT product (GDP) and, hence, for the market to again become a seller’s market. 8. IT leaders should concentrate on three main competencies to be successful in a period of random change: passion for the customer, passion for the product, and passion for the people. 9. Many IT change efforts are failing because they are trying to put programs in place in bureaucratic organizations designed for the 1960s, that were really designed for the STAR organization. 10. Turnover at the CIO level and outsourcing will continue in the short term because the business units do not perceive that IT adds timely and cost-effective value to the bottom line. CRITICAL SUCCESS FACTORS FOR WORKFORCE TRANSFORMATION If only a portion of the above statements is true, what can IT leaders do to load their change programs for success? Our conclusions are as follows. 1. Create a vision of the new organization with clearly stated core values and behaviors. 2. Help associates understand the benefits of change (the “whys” and “hows”), because if we do not change, we are all dead. 3. Demonstrate radical change and stay the course. 4. Involve as many associates as possible and listen to them. 5. Realize that repetition never spoils the prayer! Communicate, communicate, and communicate some more. 6. Benchmark with the best. 7. Utilize IT leaders who demonstrate the new values and behaviors. 8. Commit resources to support the change program, in the realization that change is not free! 9. Select future leaders based on the new values and competencies (like paneling, for example, which is a process that uses a committee to evaluate people for future assignments). 10. Monitor progress. Use full-spectrum performance measures before, during, and after the change program. 11. Use multiple interventions and levers. Build on opportunities. 12. Change the performance appraisal and reward system. 13. Put a culture in place that thrives on change. Capitalize on chaos. Notes 1. The Wall Street Journal, “Terror’s Toll on the Economy,” October 9, 2001. 2. Robert A. Zawacki, Carol A. Norman, Paul A. Zawacki, and Paul D. Applegate, Transforming the Mature IT Organization: Reenergizing and Motivating People, EagleStar Publishing, 1995. Also see Wendell L. French, Cecil H. Bell, Jr., and Robert A. Zawacki, Organization Development and Transformation: Managing Effective Change (5th ed.), McGraw-Hill Publishing, 1999. 3. Ibid. pp. 49–50.


Facilitating Transformations in IT: Lessons Learned along the Journey 4. The term “IT leaders” is an all-inclusive term that includes people such as CIOs, vice presidents of systems development, directors of networks and operations, and presidents and vice presidents of software organizations. 5. Paul C. Nutt and Robert W. Backoff, “Facilitating Transformational Change,” Journal of Applied Behavioral Science, 33(4), 491, December 1997. 6. For an example of this process see Robert A. Zawacki and Howard Lackow, “Team Building as a Strategy for Time-Based Competition,” Information Systems Management, Summer 1998, pp. 36–39. 7. Zawacki, et al., pp. 26–27. 8. 360 Degree Benchmarking is a trademark of Technology & Business Integrators (TBI) of Woodcliff Lake, New Jersey. 9. For complete guidelines to bureaucracy-bashing, see Figure 2-2 in Zawacki et al., p. 48. 10. The Job Diagnostic Survey — Information Technology and global database is a copyrighted methodology of Zawacki and Associates of Colorado Springs.



Chapter 4

Strategic Information Technology Planning and the Line Manager’s Role Robert Heckman

How can a company gain the benefits of entrepreneurial IT decision making by line managers without permitting the IT environment to become a highcost, low-performance, disconnected collection of independent systems? This chapter proposes an approach to IT planning that includes a formal role and responsibility for line managers. When combined with centralized IT architecture planning, this planning technique creates an approach to information management that is simultaneously top-down and bottom-up. The pendulum is swinging back. For more than a decade the responsibility for managing and deploying information resources has ebbed away from the centralized information management (IM) department and into line departments. The end-user computing revolution of the 1980s was followed by the client/server revolution of the 1990s. In both cases the hopedfor outcome was the location of information resources closer to the customer and closer to marketplace decisions, which in turn would lead to better customer service, reduced cycle time, and greater empowerment of users. The reality, however, was often quite different. Costs for information technology spiraled out of control, as up to half the money a company spent on information technology was hidden in line managers’ budgets. In addition to higher costs, distributed architectures often resulted in information systems with poor performance and low reliability. Because the disciplines that had been developed for centralized mainframe systems were lacking, experienced technologists were not surprised when client/server 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


ACHIEVING STRATEGIC IT ALIGNMENT systems performed poorly. Many client/server systems lacked (and still lack) effective backup and recovery procedures, capacity planning procedures, or performance analysis metrics. With costs up and performance down, CEOs are once again calling for greater centralized control over information resources. The growing movement toward enterprise resource planning (ERP) systems such as those offered by SAP, PeopleSoft, and Baan has also increased awareness of the need for careful management of the IT infrastructure. The architectures of the client/server versions of these systems paradoxically create a need for stronger centralized control of the IT infrastructure. Large companies such as Kodak (SAP) and Corning (PeopleSoft) have created single infrastructure development teams with integrated responsibilities for technical architecture, database administration, site assessment, and planning. Finally, the diffusion of Internet and intranet resources has suggested to many that a more centralized approach to control of network resources is also desirable — in fact, even necessary. Recent discussions about the network computer, one that obtains virtually all application and data resources from a central node, have reminded more than one observer of the IBM 3270 “dumb terminal” era. THE IT MANAGEMENT CHALLENGE Despite these drivers toward re-centralization, the forces that originally led to diffusion of IT management responsibility still exist. Certainly, the impact of information technology continues to grow and at the same time becomes more widely diffused throughout organizations. Likewise, the need to respond quickly to competitive thrusts continues to increase the value of independent IT decision making by line managers. As technologies become more user-friendly and the workforce becomes more IT literate, it is inevitable that line managers will face more and more technology-related decisions. The challenge, then, is how to gain the benefits of entrepreneurial IT decision making by line managers without permitting the IT environment to become a high-cost, low-performance, fragmented, and disconnected collection of independent systems. One solution to the IT management challenge is better IT planning. Information systems planning is an idea that has been with us for some time, and numerous systems planning methodologies have been developed and published. However, most IT planning methodologies are based on a topdown, centralized approach and are motivated more by technology issues than by business issues. They tend to be driven or facilitated by technologists within the centralized IM organization, or by outside consultants 38

Strategic Information Technology Planning and the Line Manager’s Role engaged by IM. Ownership of the process and the responsibility for its success are vested in the IM analyst’s role. Top-down, centralized planning conducted by the IM department has an important, even critical role, especially in large organizations. The construction of a single, standardized IT architecture and infrastructure is a crucial step for the successful integration of systems throughout the organization. It provides the foundation upon which aligned business and technology strategies can be built. The development and management of the infrastructure is clearly a centralized IM responsibility. However, it only solves half of the IT management and planning problem. Top-down, centralized IT planning is unlikely to result in a portfolio of IT investments that effectively uses the infrastructure to achieve business objectives. A DIALECTICAL APPROACH A more comprehensive view of IT planning is needed to address the simultaneous needs for centralized coordination and diffused decision making. The first step is to recognize that such planning will necessarily be dialectical — that is, it will involve conflict. To say that a process is dialectical implies tension or opposition between two interacting forces. A dialectical planning process systematically juxtaposes contradictory ideas and seeks to resolve the conflict between them. This expanded view of planning is based on the idea that effective planning can be neither exclusively top-down nor exclusively bottom-up. It must be both. The key to success using this planning philosophy is the creation of a formal role for line managers in the IT planning process. The topdown/bottom-up IT planning approach shown in Exhibit 1 is built on three fundamental principles: 1. Push responsibility for IT planning down and out into the organization. The ability to manage and plan for information resources must be a normal and expected skill for every line manager, equal in importance to the management of human and financial resources. 2. Integrate the IT planning activities of line managers through the role of a chief information officer (CIO). By emphasizing the benefits of entrepreneurial IT decision making by the line manager responsible for business strategy, organizations run the risk of the IT environment becoming fragmented and unresponsive. The CIO, as leader of the information management department, must be responsible for integration and control of IM throughout the organization. 3. View the IT environment as an information market economy. Line managers are free to acquire resources from the information market as they choose. However, just as the federal government regulates activities in the national economy through guidelines, policies, and 39


Exhibit 1.

Responsibilities in a Dialectical Planning Process

standards, the CIO establishes the information infrastructure within which line managers make information market decisions.1 This emphasis on departmental strategy as opposed to corporate strategy is intentional. It does not deny the critical importance of unified corporate-level business and IT strategies. Rather, it acknowledges that there are often departmental strategies that are not identified in corporate strategy or that may, to some degree, conflict with corporate strategy. A topdown/bottom-up planning process recognizes the possibility that corporate-level business and IT strategies may be influenced over time by the strategic choices made in the sub-units. THE LINE MANAGER’S ROLE Since much attention both in literature and in practice has been given to the top-down component of IT planning, procedures for this kind of work are widely understood in the community of technologists. IT planning, however, is likely to be an unfamiliar job for many line managers. The following simplified planning process (shown in Exhibit 2) may provide a useful framework for line managers to follow when beginning departmental IT planning. Unlike many detailed processes which are more suitable for projectlevel planning, this streamlined approach is valuable because it ensures that line managers focus their attention at the strategic and tactical levels 40


Exhibit 2.

An IT Planning Process for Line Management

rather than at the detailed project level. The process is also highly flexible and adaptable. Within each of the three stages any number of techniques may be adopted and combined to create a customized process that is comfortable for each organizational culture. Stage 1: Strategic Alignment The overall objective of Stage 1 is to ensure alignment between business and technology strategies. It contains two basic tasks: developing an understanding of the current technology situation and creating a motivating vision statement describing a desired future state. In addition to understanding and documenting the current business and technology contexts, this stage has the goal of generating enthusiasm and support from senior management and generating commitment, buy-in, and appropriate expectation levels in all stakeholders. One technique for creating a rich description of the current business and technology situation is the BASEline analysis. The four steps in the BASEline analysis procedure are shown in Exhibit 3. Additional techniques which can be used in Stage 1 are scenario creation, stakeholder interviews, brainstorming, and nominal group techniques. 41

Exhibit 3. BASEline Analysis

Every planning process should begin with a clear understanding of the current situation. The purpose of a BASEline analysis is to define the current state in a systematic way. To insure comprehensiveness, it draws on multiple sources of information. While it is true that the process of intelligence gathering should be ongoing, proactive, and systematic, the formal planning exercise provides an opportunity to review and reflect on information already compiled. In addition, gaps in the current knowledge base can be identified and filled.

The BASEline analysis procedure explores the current state in terms of four dimensions:

Business Strategy: If the IT strategy is to be in alignment with the business strategy, it is crucial that the business strategy be clearly articulated and understood by all members of the planning team.

Assets: If the IT strategy is to be implementable, it must be realistic. That is, it must be based on an objective assessment of the assets currently available or realistically obtainable by the organization. Included in this analysis are tangible assets such as hardware and software, databases, capital, and people. In addition, invisible assets such as management skills, technical skills, proprietary applications, core competencies, marketplace position, and customer loyalty also provide a foundation for future strategic moves.

System Strategy: To avoid fragmentation, duplication, and incompatibility, a departmental IT plan must recognize the opportunities and constraints provided by corporate IT policies and infrastructure. The departmental planning team must be knowledgeable about corporate standards and plans to effectively integrate its initiatives.

Environments: External environments often create important constraints and opportunities for IT planners. Relevant strategic planning assumptions about future technological, regulatory, economic, and social environments must be brought to the surface and agreed upon by the planning team.
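One way a departmental planning team might operationalize the BASEline review is as a simple structured checklist that records what has been documented for each of the four dimensions and flags the gaps that remain. The field names and the completeness rule below are an illustrative sketch, not part of the exhibit.

```python
from dataclasses import dataclass, field

@dataclass
class BaselineAnalysis:
    """Working notes for the four BASE dimensions described in Exhibit 3."""
    business_strategy: str = ""                # B: the articulated business strategy
    tangible_assets: list = field(default_factory=list)    # A: hardware, software, databases, capital, people
    invisible_assets: list = field(default_factory=list)   # A: skills, core competencies, customer loyalty, etc.
    system_strategy: str = ""                  # S: corporate IT standards, policies, infrastructure constraints
    environment_assumptions: list = field(default_factory=list)  # E: technological, regulatory, economic, social

    def gaps(self):
        """Return the dimensions the team still needs to research."""
        missing = []
        if not self.business_strategy:
            missing.append("business strategy")
        if not (self.tangible_assets or self.invisible_assets):
            missing.append("assets")
        if not self.system_strategy:
            missing.append("system strategy")
        if not self.environment_assumptions:
            missing.append("environments")
        return missing

# Example usage during a planning session:
baseline = BaselineAnalysis(business_strategy="Grow share in mid-market accounts")
baseline.tangible_assets.append("Customer database and call-center platform")
print("Still to document:", baseline.gaps())
```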

Stage 2: Create an IT Investment Portfolio In this stage the objective is to identify a rich set of options for future information technology investments. In addition to generating potential investment options, in this stage it is also important to understand which options will have the greatest impact on the business, to assess the risk associated with each option, and to estimate the resources required to implement the impact options. Techniques that may be used in this stage are the strategic option generator,2 value chain analysis,3 critical success factor analysis,4 brainstorming, and nominal group techniques. It may also be useful in this stage to systematize the evaluation process through the use of some form of scoring model. A scoring model enables the planning team to integrate considerations such as financial benefit, strategic impact, and risk. Stage 3: Tactical Bridge In this stage the line manager takes the crucial actions necessary to ensure that the strategic IT investment portfolio is actually implemented. To overcome the greatest single threat to strategic technology planning — a plan 42

Strategic Information Technology Planning and the Line Manager’s Role that is put on the shelf and never looked at again — it is important to ensure that resources are made available to implement the projects that comprise the investment portfolio. To do this, it is necessary to integrate the work accomplished in strategic planning with the ongoing, periodic tactical planning that occurs in most organizations. The most important tactical planning activities are often financial. It is imperative that money be allocated in the annual budgeting cycle to execute the strategic projects identified in the IT investment portfolio. While this may seem obvious, companies often fail to make this link. It is assumed that the operating budget will automatically take into account the strategic work done six months earlier, but in the political process of budget allocation, the ideas in the strategic plan can easily be forgotten. Human resources are also often taken for granted. However, careful tactical planning is usually necessary to ensure that the right blend of skills will be available to implement the projects called for in the strategic plan. Once appropriate resources (time, money, people) have been allocated, then intermediate milestones and criteria for evaluation should be developed. Finally, effective communication of the strategic and tactical work that has been done is a crucial step. Dissemination of the planning work through management and staff presentations and publications will ensure that organizational learning occurs. Thus, attention should be devoted in Stage 3 not only to ensure that strategic plans are implementable, but that they continue to affect the organization’s strategic thinking in the future. PLANNING PROCEDURES When beginning the process of IT planning for the first time, a number of basic procedural issues will have to be addressed and resolved. Who is the line manager responsible for developing an IT plan? Who should be involved in the IT planning process? How formal should the process be? What deliverables should be produced? What is an appropriate planning horizon? What is the right planning cycle? There is no one right answer to these questions. The culture and the leadership style of the company and the department will to a great degree influence how planning processes are executed. There are, however, several procedural guidelines that may be useful for line managers who are undertaking the task of IT planning: Who Is the Line Manager? The key role in this process is played by the department or business unit manager. In smaller companies, no more than two or three senior executives may play the line manager role as described here. Larger corporations may have as many as 20 or 30 business units with a scope that war43

ACHIEVING STRATEGIC IT ALIGNMENT rants independent IT planning. Regardless of who occupies the role of line manager, it is absolutely critical that this individual take an active interest in IT planning and be personally involved in the process. He or she is the only one who can ensure that directly reporting managers view planning for information resources as an integral part of their job accountability. Who Should Be Involved? Composition of a planning team is a delicate art. We may think of representation on the planning team both horizontally and vertically. Attention to horizontal representation ensures that all sub-units are represented in the planning process. Attention to vertical representation ensures that employees at all levels of the organization have the opportunity to provide input to the planning process. It is also critical that departmental IT planning has a link to corporate IT planning. Thus it is usually beneficial to include a member of the central IM staff on the departmental planning team. Other outside members may also be appropriate, especially those who can provide needed technical expertise in areas such as emerging technologies where the line management team may not have the necessary technical expertise. Process and Deliverables As the business environment becomes more dynamic and volatile, the technology planning process must be more flexible and responsive. Thus the planning process should not be too rigid or too formal. It should provide the opportunity for numerous face-to-face encounters between the important participants. Structure for the process should provide welldefined forums for interaction rather than a rigidly specified set of planning documents. Perhaps a more effective mechanism for delivering the work of planning teams is for the line manager to periodically present the departmental IT plan to other senior managers, the CIO, and to members of his own staff. Planning Horizon The planning horizon must also be determined with the dynamic nature of the information technology environment in mind. Although there are exceptions, it is usually unrealistic for a department manager to plan with any precision beyond two years. Corporate IT planning, on the other hand, must look further when considering the corporate system’s infrastructure and policies. This long-term IT direction must be well understood by departmental managers, for it is critical to line planning activities.


Strategic Information Technology Planning and the Line Manager’s Role Planning Cycle: A Continuous Process Plans must be monitored and updated frequently. It may be sufficient to go through a formal planning exercise annually, including all three steps mentioned earlier. Checkpoint sessions, however, may occur at various intervals throughout the year. Major evaluations — such as the purchase of a large software package or the choice of a service provider — are likely to occur at any time in the cycle and should be carefully integrated with planning assumptions. It is absolutely critical that the strategic IT plan be integrated into other strategic and tactical planning processes, such as strategic business planning and annual budgeting for the department. Unless this linkage is formally established, it is very unlikely that strategic IT planning will have much influence on subsequent activities. CONCLUSION Regardless of the procedures chosen, the goal is for all members of the organization to understand that strategic IT planning is a critical component of business success. Everyone should be aware of the decisions reached and the linkage between business strategy and technology strategy. In the future, when all members of line management recognize that strategic technology planning is an essential component of strategic business planning, then an emphasis on strategic IT planning as a stand-alone activity may not be necessary. For now, however, as the pendulum swings back from decentralized to centralized control of information resources, there is a risk that line managers may not recognize the need for strategic IT planning. As we better understand the importance of centralized control of the IT infrastructure, we must not forget that the integration of IT into business strategy remains the province of every line manager who runs a business unit. Notes 1. Boynton, A.C. and Zmud, R., “Information Technology Planning in the 1990s: Directions for Practice and Research,” MIS Quarterly, 11(1), 1987, 59–71. 2. Wiseman, C. 1988. Strategic Information Systems, Irwin Publishing, Toronto, Ontario, Canada. 3. Porter, M. and Millar, V., “How Information Gives You Competitive Advantage,” Harvard Business Review, July 1985. 4. Shank, E. M., Boynton, A. C, and Zmud, R., “Critical Success Factors as a Methodology for MIS Planning,” MIS Quarterly, June 1985, pp. 121–129.



Chapter 5

Running Information Services as a Business Richard M. Kesner

Most enterprises lack a comprehensive process that ensures the synchronization of IT project and service investments with overall business planning and delivery. Indeed, many enterprises fail to clarify and prioritize IT investments based upon a hierarchy of business needs and values. Some enterprises do not insist that their IT projects have a line-of-business sponsor who takes responsibility for the project’s outcomes and who ensures sufficient line-of-business involvement in project delivery. To address these shortcomings, each and every enterprise should embrace a process where the business side of the house drives IT investment, where both business and IT management holistically view and oversee IT project deliverables and service delivery standards, and where ownership and responsibility for said IT projects and services are jointly shared by the business and the IT leadership. The objective of this chapter is to present a framework and set of tools for viewing, communicating, managing, and reporting on the commitments of the IS organization to the greater enterprise.1 Throughout, the uncompromising focus is on the customer and hence on the enterprise’s investment in information technology from the standpoint of customer value. A starting point in this discussion is a simple model of the “internal economy” for information services that in turn drives IS’ allocation of resources between service delivery and project work. This model serves as the foundation for a more detailed consideration of two complementary IS business processes: service delivery management and project commitment management. The chapter then discusses a process for the more effective synchronization, communication, and oversight of IS service and project commitments, including the use of measurement and reporting tools. Finally, the


discussion will turn to the benefits of establishing an enterprisewide IS project office to support IS delivery and to better monitor and leverage the value of the enterprise’s IT investment portfolio. THE “INTERNAL ECONOMY” FOR INVESTING IN IT SERVICES AND PROJECTS All organizations are resource constrained. Their leaders must choose where best to invest these limited resources. Although the IS share of the pie has been increasing with the growing use of IT across the enterprise, it too has its limits, requiring planning and prioritization in line with the needs of the greater enterprise. Effectively and efficiently managing IS resources requires an understanding of the full scope of the demands driving the prioritization of these IT investments. At the most fundamental level, organizations invest in technology in compliance with mandated legal and accounting requirements, such as those set forth by federal and state taxation authorities, government legislation, regulatory statutes, and the like. At the next level, an enterprise expends resources to maintain its existing base of information technology assets, including hardware and software maintenance; system licenses and upgrades; security services; and desktop, storage, and printer expansions and replacements. These investments are meant to “keep the lights on” and, therefore, are not discretionary; nor are these costs stagnant. They go up with inflation and as new workers are added or as the network and related IT infrastructures grow. Furthermore, as new IT services are introduced to the environment, they become, over time, part of the enterprise’s embedded base of IT, expanding its nondiscretionary IT spending. Because none of these IT products and services run on their own or function flawlessly, IS must also provide significant and cost-effective end-user operations, production support, and troubleshooting. Finally, because neither the requirements of IS customers nor the evolution of information technology itself is static, there is a constant need to enhance existing IT products and services and to invest strategically in new IT capabilities. Thus, the day-to-day delivery of an information services organization must balance ongoing services — typically running 24 hours a day/seven days a week (a.k.a. 24/7) — with a wide range of service and system enhancements and new project work. Many times, IS project-delivery resources overlap with those focused on service delivery for the simple reason that the development team must understand the current state of the enterprise’s business requirements and IT capabilities if they are to deliver requested improvements. Furthermore, those maintaining an IT service should have a hand in its creation or, at the very least, thoroughly understand its IT underpinnings. Thus, a balanced IS organization requires a workforce dedicated to 24/7 service delivery overlapping a

Exhibit 1. Group IS Expenditures
[Figure: the total cost of IT ownership shown as stacked layers. Discretionary spending, governed by project plans, covers new projects and enhancements; beneath it sits the IT investment reserve; nondiscretionary spending, governed by SLAs, covers system maintenance, infrastructure maintenance, and work required by external agencies.]

core group focused on technological innovation, development, and systems integration. Taken together, these various layers of IT investment establish the boundaries of the IS organization’s internal economy. As modeled in Exhibit 1, one can group IS expenditures into two large buckets: nondiscretionary costs that support existing IT investments and discretionary costs that fund new initiatives, including major system enhancements and new IT projects.2 Note that our model comprehends all of the enterprise’s IT expenditures, including internal (IS staff) labor and external vendor, consulting, and contractor costs. The “IT investment reserve” represents an amount set aside each year as a contingency for both discretionary and nondiscretionary cost overruns. Driven by the number of users and the extent of services, nondiscretionary costs will typically consume at least 50 percent of the annual IT budget and, if not carefully managed, may preclude the opportunity for more strategic IT (project-based) investments. Put another way, the total sum devoted to IT expenditure by the enterprise is rarely elastic. If nondiscretionary costs run out of control, there will be little left for project work. If the business’ leadership envisions major new IT investments, these may only come at the expense (if possible!!!) of existing IT services or through enlarging the overall IS allocation.3 Not surprisingly, the enterprise’s leaders usually want it both ways; namely, they expect the IS organization to keep the total cost of IT down while taking on new initiatives. For this reason, it is incumbent upon the IS 49

ACHIEVING STRATEGIC IT ALIGNMENT leadership to manage their commitments with great care through a rigorous process of project prioritization, customer expectation management, and resource alignment. To succeed in this endeavor, IS must keep it simple and keep it collaborative. More specifically, they should employ an investment-funding model along the lines mentioned above. They should separate out and manage recurring (nondiscretionary) activity through service level agreements (SLAs).4 Similarly, they should manage projects through a separate but connected commitment synchronization process.5 Throughout these labors, IS management should employ metrics that measure value to the business and not merely the activity of IS personnel. Last but not least, while IS management should take ownership of the actual technology solutions, they must also ensure that the proper business sponsors, typically line-of-business executive management, take ownership of and responsibility for project delivery and its associated business process changes in partnership with their IS counterparts. The next two sections of this chapter will, in turn, consider in greater detail best practices in the areas of service and project delivery management. MANAGING SERVICE DELIVERY6 The services delivered by IS to its customers across the enterprise have evolved over time and are in a constant state of flux as both the business needs of the organization and its underlying enabling technologies evolve. Given this ever-changing environment and given the general inadequacies of the typical lines of communication between IS teams and their customers, much of what is expected from IS is left unsaid and assumed. This is a dangerous place to be, inevitably leading to misunderstandings and strained relations all around. The whole point of service level management is for IS to clearly and proactively identify customer requirements, define IS services in light of those requirements, and articulate performance metrics (a.k.a. service levels) governing service delivery. Then, IS should regularly measure and report on IS performance, hence reinforcing the value proposition of IS to its customers. In taking these steps, IS management will provide its customers with a comprehensive understanding of the ongoing services delivered to them by the IS organization. Furthermore, service level management establishes a routine for the capture of new service requirements, for the measurement and assessment of current service delivery, and for alerting the customer to emerging IT-enabled business opportunities. In so doing, IS service delivery management will ensure both that IS resources are focused on delivering the highest value to the customer and that the customer 50

Running Information Services as a Business appreciates the benefits of the products and services so delivered. The guiding principles behind such a process may be summarized as follows: • Comprehensive. The process must encompass all business relationships, and products and services delivery by IS on behalf of its customers. • Rational. The process should follow widely accepted standards of business and professional best practice, including standard system development life-cycle methodologies. • Easily understood. The process needs to be streamlined, uncomplicated, and simple, hence easily accessible to nontechnical participants in the process. • Fair. Through this process the customers will understand that they pay for the actual product or service as delivered; cost and service level standards should be benchmarked and then measured against other best-in-class providers. • Easily maintained. The process should be rationalized and largely paperless, modeled each year on prior year actuals and subsequently adjusted to reflect changes in the business environment. • Auditable. To win overall customer acceptance of the process, key measures must be in place and routinely employed to assess the quality of IS products, services, and processes. The components of the IS service delivery management process include the comprehensive mapping of all IS services against the enterprise communities that consume those services. It also includes service standards and performance metrics (including an explicit process for problem resolution), the establishment and assignment of IS customer relationship executives (CREs) to manage individual customer group relations, a formal service level agreement for each constituency, and a process for measuring and reporting on service delivery. Let us consider each of these in turn. As a first step in engineering an IS service-level management process, IS management must segment its customer base and conceptually align IS services by customer. If the IS organization already works in a business environment where its services are billed out to recover costs, this task can be easily accomplished. Indeed, in all likelihood, such an organization already has SLAs in place for each of its customer constituencies. But for most enterprises, the IS organization has grown up along with the rest of the business and without any formal contractual structure between those providing services and those being served.7 To begin, employ your enterprise’s organization chart and map IS delivery against that structure. As you do so, ask yourselves the following questions: 51

ACHIEVING STRATEGIC IT ALIGNMENT • What IS services apply to the entire enterprise and who sponsors (i.e., pays for or owns the outcome of) these services? • What IS services apply only to particular business units or departments and who sponsors (i.e., pays for or owns the outcome of) these services? • Who are the business unit liaisons with IS concerning these services and who are their IS counterparts? • How does the business unit or IS measure successful delivery of the services in question? How is customer satisfaction measured? • How does IS report on its results to its customers? • What IS services does IS itself sponsor on its own initiative without any ownership by the business side of the house? Obviously, the responses to these questions will vary greatly from one organization to another and may in fact vary within an organization, depending upon the nature and history of working relationships between IS and the constituencies it serves. Nevertheless, it should be possible to assign every service IS performs to a particular customer group or groups, even if that “group” is the enterprise as a whole. Identifying an appropriate sponsor may be more difficult, but in general, the most senior executive who funds the service or who is held accountable for the underlying business enabled by that service is its sponsor. If too many services are “owned” by your own senior IS executive rather than a business leader, IS may have a more fundamental alignment problem. Think broadly when making your categorizations. If a service has value to the customer, some customers must own it. Service Level Agreements In concluding this piece of analysis, the IS team will have identified and assigned all of its services (nondiscretionary work) to discrete stakeholder constituencies. This body of information may now serve as the basis for creating so-called service level agreements (SLAs) for each customer group. The purpose of the SLA is to identify, in terms that the customer will appreciate, the value that IS brings to that group. However, the purpose of the SLA goes well beyond a listing of services. First and foremost, it is a tool for communicating vital information to key constituents on how they can most effectively interact with the IS organization. Typically, the document includes contact names, phone numbers, and e-mail addresses. SLAs also help shape customer expectations in two different but important ways. On the one hand, they identify customer responsibilities in dealing with IS. For example, they may spell out the right way to call in a problem ticket or a request for a system enhancement. On the other hand, they define IS performance metrics for the resolution of problems and for responding to customer inquiries. Last but not least, a standard SLA com52

Running Information Services as a Business piles all the services and service levels that IS has committed to deliver to that particular customer. Service level agreements can take on any number of forms.8 Whatever form you choose, ensure that it is as simple and brief a document as possible. Avoid technical jargon and legalese language, and be sensitive to the standard business practices of the greater enterprise within which your IS organization operates.9 Most of all, write your SLAs from your customers’ perspective, focusing on what is important to them. Tell them in plain English what services they receive from you, the performance metrics for which IS is accountable, and what to do when things break down or go wrong. Within these more general guidelines, IS SLAs should include the following elements: • A simple definition of the document’s purpose, function and scope • The name(s) and contact information of those parties within IS who are responsible for this particular document and the associated business relationship (typically the assigned IS customer relationship executive and one or more IS business officers) • A brief set of statements identifying the various units within IS, their roles and responsibilities, and how best to contact them for additional information, support, and problem resolution10 • A table listing the particular information technology assets and IS services addressed in the SLA, including hours of operation and support for listed systems and services • Any exclusion statements, such as “significant system enhancements of over $10,000 in value and larger IS projects will be covered through separate agreements between the XYZ Department and Information Services” • If appropriate, a breakdown of service costs and their formulas, if these costs are variable, as well as the projected total cost of the services delivered for the fiscal year of the SLA • Business unit responsibilities11 • Service level response standards when problems arise (see Exhibit 2 for an example) • Escalation procedures for the handoff of problems as need be (see Exhibit 3 for an example • A glossary of key terms, especially defining system maintenance and enhancement activities and the roles and responsibilities of service delivery process participants • Service metrics12 and reporting standards • A sign-off page for the executive sponsor13 and the working clients who are in receipt of the SLA prepared by IS Your next step is to assign a customer relationship executive (CRE) to each SLA “account.” The role of the CRE is to serve as a primary point of 53

Exhibit 2. Service Level Response Standards

Severity    Description                                               Response Time
Critical    Application does not function for multiple customers     1 business day
High        Application function does not work for single customer   2 business days
Low         Application questions                                     3 business days

contact between customer executive management and the IS organization. In this role, the CRE will meet with his/her executive sponsor and working clients to initially review that business unit’s SLA and thereafter on a regular basis to return to that group to assess IS performance against the metrics identified in the SLA. Where IS delivers a body of services that apply across the enterprise, you might consider creating a single “community” SLA that applies to all and then brief addenda that list the unique systems and services that pertain to particular customer groups. Whatever the formal structure of these documents, the real benefit of the process comes from the meetings where the CRE will have an opportunity to reinforce the value of IS to the customer, listen to and help address IS delivery and performance problems, learn of emerging customer requirements, and share ideas concerning opportunities for further customer/IS collaboration. CREs will act within IS as the advocates and liaisons for, and as the accountable executive partners to, their assigned business units in strategic matters. Needless to say, CREs must be chosen with care. They must be good listeners and communicators. They must have a comprehensive understanding of what IS currently delivers and what information technology may afford the customer in question. While they need not be experts in the aspect of the business conducted by the customer, they must at the very least have a working knowledge of that business, its nomenclature(s), and the roles and responsibilities of those working in that operating unit. Among the many skills that a good CRE must possess is the ability to translate business problems into technical requirements and technical solutions into easily understood narratives that the customer can appreciate. The CRE also needs to be a negotiator, helping the customers manage their portfolios of IS services and projects within the available pool of resources, choose among options, and at times defer work to a better time. The greatest value of the CRE is to act as a human link to a key customer constituency, managing their expectations while keeping IS focused on the 54

Running Information Services as a Business Exhibit 3.

Escalation Procedures for the Handoff of Problems

Priority 1
Definition: Application is unavailable to anyone in the enterprise.
Response time: Work will begin immediately and continue until resolved.
Responsibilities:
• IS service provider — resolves problem and communicates to all who are affected at least daily until resolved
• Working client (a) — works alongside CRE until the matter is resolved
• Partner-providers (b) — other IS teams and external vendors will provide technical assistance

Priority 2
Definition: Application is not available for individual users within a site.
Response time: A response will be provided within one business day. A recommended solution will be provided within three business days if there are no outstanding priority 1s. Finding a solution to a priority 2 problem will not begin until all priority 1 problems that impact the priority 2 issue’s resolution have been resolved.
Responsibilities:
• IS service provider — sends acknowledgment of problem, resolves problem, and communicates status to all who are affected
• Working client — works alongside CRE until the matter is resolved
• Partner-providers — employed as need be

Priority 3
Definition: Application generates appropriate results but does not operate optimally.
Response time: Improvements will be addressed as part of the next scheduled release of the system.
Responsibilities:
• IS service provider — communicates needed changes
• Other process participants — as part of the regular system upgrade cycle

(a) The executive sponsor rarely gets involved in the day-to-day collaboration with IS. The working clients are those representatives of the business unit served who work with IS on a regular basis and who have the authority to speak for the business unit in terms of identifying service requirements or in changing project priorities.
(b) While the party directly responsible for a service (e.g., e-mail, help desk) should deal directly with the customer concerning problem resolution, most customer services entail a value chain of technology services. The IS owners of the latter services are partner-providers to the former (e.g., network and server services are partner-providers to a Web application owner).

quality delivery of its commitments to that group. This effort is iterative (see Exhibit 4): collecting and processing data, meeting with customers and IS service providers, listening, communicating, and educating. In total, the service level management process will ensure the proper alignment between customer needs and expectations on the one hand, and IS resources on the other. The process clearly defines roles and responsibilities, leaves little unsaid, and keeps the doors of communication and understanding open on both sides of IS service delivery. From the standpoint of the IS leadership, the SLA process offers the added benefit of maintaining a current listing of IS service commitments, thus filling in the non55

Exhibit 4. SLA Management Process
[Figure: a continuous cycle of six steps: 1. Define the SLA; 2. Assign the SLA Owner; 3. Monitor SLA Compliance; 4. Collect and Analyze Data; 5. Improve the Service Provided; 6. Refine the SLA; the cycle then returns to step 1.]

discretionary layers of IS’ internal economy model. Whatever resources remain can be devoted to project work and applied research into new ITenabled business opportunities. Bear in mind that this is a dynamic model. As the base of embedded IT services grows, a greater portion of IS resources will fall within the sphere of nondiscretionary activity, limiting project work. The only way to break free from this set of circumstances is to either curtail existing services or broaden the overall base of IS resources. In any event, the service-level management process will provide most of the information that business and IS leaders need to make informed decisions. With the service side of the house clarified, IS leaders can turn their attention to the discretionary side of delivery. Project work calls for a complementary commitment management process of its own. MANAGING PROJECT COMMITMENTS To put it simply, any IS activity that is not covered through a service level agreement is by definition a project that must be assigned IS discretionary resources. Enterprises employ a planning process of some type whereby IT projects are identified and prioritized. IS is then asked to proceed with this list in line with available resources. Unfortunately, IS organizations find it much easier to manage and deliver routine, ongoing (SLA) services than to execute projects. The underlying reasons for this state of affairs may not be obvious, but they are easily sum56

Running Information Services as a Business marized. Services are predictable events, which are easily metered and with which IS personnel and their customers have considerable experience and a reasonably firm set of expectations. More often than not, a single IS team oversees day-to-day service delivery (e.g., network operations, Internet services, e-mail services, security administration, and so forth). Projects, on the other hand, typically explore new territory and require an IS team to work on an emerging, dynamic, and not necessarily well-articulated set of customer requirements. Furthermore, most projects are, by definition, cross-functional, calling on expertise from across the IS organization and that of its business unit customers. Where there are many hands involved and where the project definition remains unclear, the risk of error, scrap, and rework is sure to follow. These are the risks that the project commitment management process must mitigate. Note that, like the SLA process, the effort and rigor of managing project commitments will vary from one organization to another and from one project to the next. The pages that follow present a framework for informed decision making by the enterprise’s business and IS leadership as they define, prioritize, shape, and deliver IT projects. Readers must appreciate the need to balance their desire to pursue best practices with the realworld needs of delivery within their own business environment. As a first step, the enterprise’s business leadership will work with IS to identify appropriate project work. Any efforts that appropriately fall under existing SLAs should be addressed through the resources already allocated as part of nondiscretionary IS funding for that work. Next, CREs will work with their executive sponsor(s) to define and shape potential project assignments for the coming year. While the CREs will assist in formulating and prioritizing these project lists, they must make it clear that this datagathering activity in no way commits IS. Instead, the CREs will bring these requests back to IS executive management who will in turn rationalize these requests into an IT project portfolio for the review and approval of the enterprise’s leadership.14 This portfolio presentation should indicate synergies and dependencies between projects, the relative merits/benefits of each proposal, and the approximately level of investment required. With this information in hand and as part of the annual budgeting/planning process, the enterprise’s business and IS leaderships will meet to prioritize the list and to commit, in principle, to those projects that have survived this initial review. Typically, all enterprise-level projects emerging from and funded by this process are, by definition, of the highest priority in terms of delivery and resource allocations. If additional resources are available, business-unit-specific projects may then be considered in terms of their relative value to the enterprise. At least in the for-profit sector, enterprises will define a return-on-investment (ROI) hurdle rate for this part of the process, balancing line-of-busi57

ACHIEVING STRATEGIC IT ALIGNMENT ness IT needs against overall enterprise IT needs. In many instances, the business units may receive approval to proceed with their own IT projects as long as they can fund these projects and as long as IS has the bandwidth to handle the additional work. Invariably, unforeseen circumstances and business opportunities will necessitate revisiting the priority list. Some projects may be deferred and others dropped in favor of more pressing or promising IT investments. Similarly, as the IS team and its business partners work through the development life cycle on particular projects, they will find that their original assumptions are no longer valid, requiring the resizing, rescheduling, redefinition, or elimination of these projects. The key to success here is the employment of an initial, rigorous project-scoping effort coupled with a comprehensive project life-cycle management process that ensures regular decision points early on in the project’s design, development, and implementation phases. Once a project is properly scoped and enters the pipeline, the IS project director,15 working in collaboration with working client(s) and supported by an IS project manager,16 will create a commitment document and a project plan (both are discussed below) reflecting detailed project commitments and resource allocations.17 The IS CRE will then monitor the project team’s overall compliance within the plan, reporting back to the customer on a regular basis. Initial project scoping is key to the subsequent steps in the project management process. Too often, projects are pursued without a clear understanding of the associated risks and resource commitments. Neither the project’s working clients nor its IS participants may understand their respective roles and responsibilities. Operating assumptions are left undocumented and the handoffs and dependencies among players remain unclear. More often than not, IT efforts undertaken without sufficient information along these lines end in severe disappointment. To avoid such unhappy results, IS project teams should embrace a commitment process that ensures a well-informed basis for action. The Commitment Management Process A framework for commitment management follows. Like the other illustrations found in this chapter, this methodology’s application should be balanced against the needs of the occasion. For example, if the project in question covers well-trodden ground, less rigor is required than if the envisioned project blazes hitherto unexplored trails. Here again, the commitment management process itself forces the project team to ask the very questions that will help them to determine the best course of action. From the outset, no project should proceed without an executive (business) sponsor and the assignment of at least one working client. The exec58

Running Information Services as a Business utive sponsor’s role is to ensure the financial and political support to see the project through. The sponsor owns the result and is therefore the project’s most senior advocate. The sponsor’s designated working clients are those folks from the business side of the house who will work hand-inhand with IS to ensure satisfactory delivery of the project. Without this level of commitment on the part of the business, no project should proceed. If the project in question happens to be sponsored by IS itself, then the chief IS executive will serve as sponsor, and the IS manager who will own the system or service once it is in production will serve as the working client. While it is assumed that the project is funded, the commitment document should indicate the project’s recognized priority. For example, is this an enterprise project of the highest priority or a line-of-business project of only middling importance? Finally, the team must ask, at what phase in the scoping of the project are we? Do we know so little about the project at hand that we are only in a speculative phase of commitment at this time, or are we so confident in our understanding of the project’s parameters that we are prepared to make a formal commitment to the customer and proceed?18 As a next step in framing the commitment, the project team should define the business problem or opportunity driving the proposed investment of IS resources. The reader may think this a trivial activity but you would be surprised at how disparate the initial conversation on this subject may become. It is essential that the team start from a common base of understanding concerning the project’s rationale and purpose. To that same end, project teams should be walked through a value template similar to Exhibit 5, so that everyone involved can appreciate the benefits of a positive project outcome. With a common view of the overall project vision and value in place, the time has come to detail project deliverables, including those that are essential for customer acceptance, those that are highly desirable if time and resources allow, those that are optional (where the project may be acceptably delivered without these components), and those elements that are excluded from the scope of this project (but that may appear in future, separately funded phases of the project). Given the project’s now agreedupon deliverables, the team should assign critical success factors for customer satisfaction based on the following vectors of measurement: scope, time, quality, and cost. These metrics must be defined in terms of the particular project. For example, if a project must be completed by a certain date (e.g., to comply with a new regulation), “time” rises to the top of the list, meaning that if time grows short, the enterprise will either adjust scope, sacrifice 59

Exhibit 5. Value Template

Business Improvement                 Major   Minor   None   Business Value Statement (in support of the improvement)
1. Increase revenue
2. Decrease cost
3. Avoid cost
4. Increase productivity
5. Improve time-to-market
6. Improve customer service/value
7. Provide competitive advantage
8. Reduce risk
9. Improve quality
10. Other (describe)

quality, or add to cost to meet the desired date. Similarly, if the scope of a project is paramount, perhaps its delivery date will be moved out to allow the team to meet that commitment. As with many other aspects of the commitment process framework, the importance of these elements is to ensure that a thoughtful discussion ensues and that issues are dealt with proactively rather than in a time of crisis. Obviously, the discussion of these critical success factors must take place with the working client(s), creating a golden opportunity to set and manage customer expectations. Because no major change to an IT environment is without implications, the commitment process must identify any major impacts to other systems and IS services that will result from the implementation of the envisioned project solution. For example, if a new application requires network infrastructure or desktop platform upgrades, these must be noted in the commitment document and their implications carried over more tangibly into the project plan. Similarly, if a new information system requires the recoding of older systems or data extracts from enterprise systems of record, these impacts must be documented and factored into the project plan. It is noteworthy to mention that what often gets a project team in trouble is not what is documented but what goes unsaid. For this reason, the 60

Exhibit 6. Risk Management Matrix

Potential Risk              Description of Risk            Resolution
Technology
Financial
Security
Data integrity
Continuity
Regulatory
Business requirements
Operational readiness
Other (explain)

commitment process should require the team to explore project assumptions, constraints, and open issues. It falls to the project’s director or manager to draw out from the team and make explicit the inferred operating principles of the project, including the roles and responsibilities of project participants (especially internal and external IT partner providers), how project delivery processes should work, what tools and technologies are to be employed, and how key business and technical decisions governing project outcomes will be made. All projects operate under constraints such as the availability of named technical specialists or the timely arrival of computer hardware and software, which may have a direct impact on outcomes but are out of the team’s direct control. These, too, need to be made explicit so that the customer appreciates the risks to the project associated with these issues. Open issues are different from constraints in that these elements can and will be addressed by the team but the fact that they are “open” may adversely impact delivery. The project team should maintain their list of assumptions, constraints, and open items so as to ensure that none of these diminish project outcomes. At the very least, their status should be shared with the customer on a regular basis as part of expectation setting and subsequent project reporting. The two remaining components of the commitment process are (1) those elements that capture the exposure from project risks and (2) those elements that itemize the project’s specific resource commitments. In terms of the former, it is perhaps useful to begin with an illustrative risk management matrix, as shown in Exhibit 6. 61
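As a minimal illustration, and not a tool drawn from the original chapter, the Exhibit 6 matrix can be kept as a small structure inside the commitment document so that each identified risk carries its category, description, and agreed resolution together and is never lost between reviews. The class names, the owner attribute, and the sample entry below are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    """Risk categories taken from the Exhibit 6 matrix."""
    TECHNOLOGY = "Technology"
    FINANCIAL = "Financial"
    SECURITY = "Security"
    DATA_INTEGRITY = "Data integrity"
    CONTINUITY = "Continuity"
    REGULATORY = "Regulatory"
    BUSINESS_REQUIREMENTS = "Business requirements"
    OPERATIONAL_READINESS = "Operational readiness"
    OTHER = "Other"

@dataclass
class RiskEntry:
    category: RiskCategory
    description: str           # description of the specific risk
    resolution: str            # the agreed mitigation or resolution
    owner: str = "unassigned"  # hypothetical field: who tracks the risk

@dataclass
class CommitmentDocument:
    project_name: str
    risks: list[RiskEntry] = field(default_factory=list)

    def open_risks(self, category: RiskCategory) -> list[RiskEntry]:
        """Return the risks recorded under one category of the matrix."""
        return [r for r in self.risks if r.category is category]

# Example: the kind of technology risk discussed in the text, with its mitigation.
doc = CommitmentDocument(project_name="Internet Upgrade")
doc.risks.append(RiskEntry(
    category=RiskCategory.TECHNOLOGY,
    description="New, untried traffic-shaping technology enters the environment",
    resolution="Involve the vendor in the initial installation and support",
))
print(len(doc.open_risks(RiskCategory.TECHNOLOGY)))  # -> 1
```

A spreadsheet or issue-tracking list serves the same purpose; the point is simply that keeping the risks in the commitment document ensures that they are not forgotten.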

ACHIEVING STRATEGIC IT ALIGNMENT In completing a commitment document, the project team should identify the major risks faced in pursuing their assignment. The aforementioned Exhibit 6 identifies risk categories and provides room for a more detailed description of a particular risk and its mitigation. For example, a project technology risk might entail introducing a new or untried technology into the enterprise’s IT environment. A way to mitigate that risk would be to involve the vendor or some other experienced external partner-provider in the initial installation and support of the technology. If the envisioned project solution requires clean data to succeed, the project plan could include a data cleanup process. If business requirements are not documented, phase one of the project could call for business analysis and process engineering work to get at those requirements. The team needs to be honest with itself and its customer in defining project risks and in dealing with them. Keeping risks in the commitment document ensures that they are not forgotten. To conclude the commitment process, the team must define its resource needs in terms of people, time, and funding. From the standpoint of people, the commitment document needs to name names and define roles and responsibilities (including the skills required) explicitly. Exhibit 7 contains an illustrative list of project roles. The project director must ensure that a real person who understands and agrees to the assignment is assigned to each project role. However, these commitments cannot occur without a delineation of the other two resource elements, namely the skills and the time commitment for each internal staff person, and the associated funding for hardware, software, contract labor, consulting, and so forth. These details will come from the project plan that accompanies the commitment document. In the plan, which should adhere to an accepted project life-cycle management methodology, activities are appropriately detailed along with the duration and performer for each task. The plan will tell the partner-providers what is required of their teams. It is the responsibility of these managers to ensure that they do not overcommit their own personnel. If the IS organization operates some sort of resource management database or tracking system, this may be easily accomplished. Otherwise, it rests with the individual manager to keep things straight. Thus, with this information in hand, when the IS partner providers commit to a role and responsibilities within a given project, this commitment is not “in principle” but is based on detailed skill, date, and duration data. When viewed in its entirety, the commitment process leaves nothing to the imagination of the project team and those they serve. The commitment document makes explicit what is to be done, why the project merits resources, who is responsible for what, and what barriers lie in the 62

Exhibit 7. Project Roles

Role                                         Name of Associate    Responsibility

The Core Project Team:
  Executive sponsor
  Working client(s)
  Project director
  Project manager
  Business analyst
  Application lead
  Systems lead
  Data management lead
  Infrastructure lead
  Customer services lead

Internal and External Partners:
  Vendor-based project management support
  Technical architect(s)
  Business process architect(s)
  Creative development/UI
  Development
  Training/documentation
  QA/testing
  Infrastructure
  Security
  Other

Partner-Provider(s) (Hardware/Software):

path of success. The project plan details how the team will execute their assignment. Together, these documents form a contract that aligns resources and provides for a common understanding of next steps, roles, and responsibilities. The metrics for successful project delivery are few and simple. Did the project come in on time and within budget? Did it meet customer expectations? To answer these questions, all one needs to do is run actual project results against the project’s commitment document and plan. In addition, the team may employ some post-implementation assessment process or survey tool such as the sample in Exhibit 8. 63
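As a rough sketch, and only under assumed field names and values, the chapter's simple delivery questions (on time, within budget, within scope) can be answered mechanically by comparing the closing figures against the commitment document; customer satisfaction, gathered through a survey such as the one in Exhibit 8, must still be assessed separately.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProjectCommitment:
    """What was promised in the commitment document (illustrative fields)."""
    promised_date: date
    budget: float
    essential_deliverables: set[str]

@dataclass
class ProjectActuals:
    """What was actually delivered, taken from the closing project report."""
    delivery_date: date
    cost: float
    delivered: set[str]

def delivery_scorecard(plan: ProjectCommitment, actual: ProjectActuals) -> dict[str, bool]:
    """Answer the simple questions: on time? within budget? essential scope met?"""
    return {
        "on_time": actual.delivery_date <= plan.promised_date,
        "within_budget": actual.cost <= plan.budget,
        "scope_met": plan.essential_deliverables <= actual.delivered,
    }

# Hypothetical example values, for illustration only.
plan = ProjectCommitment(date(2003, 6, 30), 250_000.0, {"firewall", "monitoring tools"})
actual = ProjectActuals(date(2003, 7, 15), 240_000.0, {"firewall", "monitoring tools"})
print(delivery_scorecard(plan, actual))
# {'on_time': False, 'within_budget': True, 'scope_met': True}
```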

Exhibit 8. Post-Implementation Assessment Process

How satisfied are you with... (rate each item using the legend at the bottom of the form)

Critical success factors (conditions of satisfaction):
  a. Scope: All agreed-upon business requirements are included
  b. Quality: The delivered product performs as expected
  c. Cost: The product was delivered within the agreed-upon cost
  d. Time: The product was delivered on the agreed-upon date

Overall satisfaction:
  a. What is your overall satisfaction with the system/service?
  b. What is your overall satisfaction with service provided by IS?

Optional: Completion of the following is optional; however, your responses will help us in our quest for continuous product improvement. Thank you.

System data:
  a. Data accuracy
  b. Data completeness
  c. Availability of current data
  d. Availability of historical data

System processing:
  a. Accuracy of calculations
  b. Completeness of functionality
  c. System reliability
  d. Security controls

System use:
  a. Ease of use
  b. Screen design
  c. Report design
  d. Screen edits (validity and consistency checks)
  e. Response time
  f. Timeliness of reports

System documentation and training:
  a. Accuracy of documentation
  b. Completeness of documentation
  c. Usefulness of documentation
  d. Training
  e. Online help

Additional comments or suggestions? (Please use back of form.)

Legend: A = Very satisfied; B = Satisfied; C = Neither satisfied nor dissatisfied; D = Dissatisfied; E = Very dissatisfied; F = Not applicable.
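One way such a survey might be tallied, offered only as an illustration: if the A through E responses are mapped to a numeric scale (the 1-to-5 mapping below is an assumption, not something the chapter prescribes) and "F" (not applicable) answers are excluded, each returned form yields a single satisfaction score that can be tracked over time.

```python
from typing import Optional

# Assumed mapping of the Exhibit 8 rating letters to a 1-5 scale; "F" (not
# applicable) is deliberately left out so it never skews the average.
SCALE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def satisfaction_score(responses: dict[str, str]) -> Optional[float]:
    """Average the applicable answers for one returned survey form."""
    values = [SCALE[letter] for letter in responses.values() if letter in SCALE]
    return round(sum(values) / len(values), 2) if values else None

# One hypothetical completed form: item identifier -> rating letter.
form = {
    "scope": "B",
    "quality": "A",
    "cost": "C",
    "time": "B",
    "online_help": "F",  # not applicable, so ignored
}
print(satisfaction_score(form))  # -> 4.0
```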

Running Information Services as a Business All in all, this framework makes for a good beginning but it does not ensure the success of the project. By being true to the commitment process, many of the problems that might otherwise befall a project will have been addressed proactively. IS METRICS AND REPORTING TOOLS Given all of the work that a typical IS organization is expected to deliver each year, it is easy to see how even major commitments may be overlooked in the furious effort to get things done. To avoid this pitfall, it is incumbent upon IS to clarify its commitments to all concerned. In the prior sections of this chapter, the author has shown how this may be done for both ongoing service and project delivery. Next, IS management must ensure compliance with its commitments once made. Here again, the aforementioned processes keep the team appropriately focused. Service level management requires the integration of performance metrics into each SLA, while the ongoing project-management process forces the team to relate actual accomplishments to plan. During the regular visits of the CREs with their customers, service level and project delivery may be raised with the customers to assess their current satisfaction with IS performance. While each of these activities is important in cementing and maintaining a good working relationship with individual customers, a more comprehensive view is required of how IS services and projects relate to one another. To this end, the author has relied on a single integrated reporting process, called the “Monthly Operations Report,” to capture key IS accomplishments and performance metrics. As its name implies, the Monthly Operations Report is a regularly scheduled activity. The document is entirely customer focused and therefore aligns with both the service level and project commitment management processes. However, it is designed to serve the needs of IS management, keeping customer delivery at the forefront of everyone’s attention and holding IS personnel accountable for their commitments. The report reflects qualitative information from each IS service delivery unit (e.g., help desk, training center, network operations, production services, and so forth) concerning accomplishments of the month. Each accomplishment must be aligned with a customer value (as articulated in SLAs and project commitment documents) if it is to be listed as a deliverable. Next, the report lists quantitative performance data such as system availability, system response time, problem tickets closed, and training classes offered. Note that some of these data points measure activity rather than results and must be balanced with customer satisfaction metrics to be truly meaningful. 65
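To make the balance between activity measures and customer satisfaction concrete, one might sketch a single delivery unit's slice of the monthly operations report as follows; the unit name, figures, and the satisfaction threshold are hypothetical and the structure is illustrative only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServiceUnitReport:
    """One IS delivery unit's slice of the monthly operations report (illustrative)."""
    unit: str                                            # e.g., help desk, network operations
    accomplishments: list[str] = field(default_factory=list)  # each tied to a customer value
    availability_pct: float = 0.0                        # quantitative/activity measure
    tickets_closed: int = 0                              # quantitative/activity measure
    customer_satisfaction: Optional[float] = None        # balances the activity measures

def flag_for_review(report: ServiceUnitReport, satisfaction_floor: float = 3.5) -> bool:
    """High activity numbers alone are not meaningful; check satisfaction as well."""
    return (report.customer_satisfaction is None
            or report.customer_satisfaction < satisfaction_floor)

helpdesk = ServiceUnitReport(
    unit="Help desk",
    accomplishments=["Rolled out password self-service for the XYZ Department"],
    availability_pct=99.2,
    tickets_closed=412,
    customer_satisfaction=4.1,
)
print(flag_for_review(helpdesk))  # -> False: the activity is backed by good satisfaction
```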

ACHIEVING STRATEGIC IT ALIGNMENT A system for collecting customer feedback is also needed. This simple surveying process is guided by the following operating principles. First, the process must require no more than two minutes of an individual customer’s time. Second, it must be conducted via the phone or face-to-face, but never via paper forms or e-mail. Third, it must employ measures of customer satisfaction rather than IS activity. Fourth, it must scientifically sample IS’ customer population. And fifth, it must be carried out in a consistent manner on a regular basis. Guided by these principles, the author’s team developed five-question survey tools for each IS service. The team then drew randomly from the help desk customer database where both requests for service and problem tickets are recorded. A single staff member spends the first few days of each month calling customers, employing the appropriate survey scripts. Results are captured in a simple database and then consolidated for the report. These customer satisfaction scores are also tracked longitudinally. All the summary data appears in the monthly operations report. Project delivery is a little more complicated to capture on a monthly basis because projects do not necessarily lend themselves to either quantitative measures or regular customer satisfaction surveying. Nevertheless, the report contains two sets of documents that IS management finds useful. The first is a project master schedule that groups projects by customer portfolio and then by inter-project dependencies. The schedule shows the duration of each project and its status (white for completed, green for on schedule, yellow for in trouble but under control, and red for in trouble). Thus, within a few pages, the IS leadership can see all of the discretionary work that the team has underway at any given time, and what is in trouble, where the bottlenecks are, and who is overcommitted. The presentation is simple and visual. Within the report, each project also has its own scorecard, a single-page representation of that project’s status. Like everything else in the report, the scorecard is also a monthly snapshot that includes a brief description of the project and its value to the customer, a list of customer and project team participants, this month’s accomplishments and issues, a schematic project plan, and a Gantt chart of the current project phase’s activities. Like the master schedule, scorecards are scored white, green, yellow, or red as appropriate. (See Exhibit 9 for a sample scorecard.) In my organization, the monthly operations report is reviewed each month in an open forum by the IS executive team. Other IS personnel are welcome to attend. And within a two- to three-hour block, the entire IS leadership team has a clear picture of the status and health of all existing IS commitments. Follow-up items raised in the review meeting are recorded and appear in the next version of the report. The document itself is distrib66

Exhibit 9. Project Scorecard

[Sample one-page scorecard for the "Internet Upgrade" project; the status key runs completed, green, yellow, red. Overall project status: green.]

Objectives: This project will address overall campus Internet capacity and performance management through both increasing the bandwidth of the University's Internet links and by providing a more flexible response to shifts in service demand through usage monitoring and service shaping tools and processes.

Value Proposition: In line with the University's commitment to academic program and research excellence, the Information Service Division through this project will assure a more reliable and scalable Internet service in response to the needs of the University community.

Executive Sponsor: Mark Hildebrand. Business Unit: IS. IS Customer Relationship Exec: NA. Planned Delivery Date / Actual Delivery Date.

Project phasing and status: OC delivery and test (green); tools delivery and test (yellow); T backup solution (yellow); firewall implementation (green); analysis of Internet options, second OC delivery and test, router and firewall installs, ResNet/NUNet separation, and test and certify (scheduled for later phases).

Highlights: OC Internet service installation underway; investigation of traffic shaping and management tools underway; firewall upgrade underway; backup telecommunications (T) link provisioned.

Issues, recommended resolutions, and owners (both yellow): vacancy in Network Engineering, with a search underway to fill the vacancy (owner: Bob Whelan); identification of monitoring tools after failure of the selected partner-provider, with a search underway for a new partner-provider (owner: Mark Hildebrand).

Appointed point persons. IS team: Project Director, Bob Whelan; Project Manager, Steve Theall; Business Analyst, Pat Todd; Project Analyst, Richard Kesner; Systems Lead, n/a; Data Lead, n/a; QA/Testing, Network Services Engineering; Infrastructure, Network Services Team; GUI, n/a; Security, n/a. Customer team: Working Client, Bob Weir (for the NEU community).

Deliverables (for current phase of execution only), tracked month by month from October through September: OC delivery and test; tools delivery and test; T backup solution; firewall implementation.

ACHIEVING STRATEGIC IT ALIGNMENT uted to the entire IS organization via the unit’s intranet site. As appropriate, sections from the report as well as individual project scorecards are shared by the unit’s CREs with their respective customers. In brief, the process keeps accomplishments and problems visible and everyone on their toes. Bear in mind, the focus of this process is continuous improvement and the pursuit of excellence in customer delivery. Blame is never individually assessed because the entire IS team is held accountable for the success of the whole. THE ROLE OF THE PROJECT OFFICE Admittedly, the methods outlined above carry with them a certain level of overhead. On the other hand, the cost of developing and maintaining these processes will prove insignificant when balanced against the payback — high-quality customer relationships and the repeatable success of IS service and project delivery efforts. But the question remains: how does an IS organization achieve the level of self-management prescribed in this chapter? A project office may be needed to ensure the synchronization of service levels and project commitment management for a given organization. The project office should report to the chief operations officer of the IS organization and enjoy a broad mandate in support of all IS service delivery and project management activities. The tasks assigned to the office might include: • Ensuring alignment between IS commitments and the enterprise’s business objectives • Collecting, codifying, and disseminating best practices to service delivery and project teams • Collecting, documenting, and disseminating reusable components such as project plans and budgets, commitment documents, and technical specification templates to project teams • Managing the reporting requirements of IS These assignments embrace both commitment and knowledge management as processes, leaving to IS operating units the responsibility for both actual service and project delivery and for the underlying technical expertise to address customer needs. As part of the project office function, it could also provide actual project managers as staff support to project teams. The role of the project manager might include: • Assisting in maintaining the project plan at the direction of the project director • Maintaining the project issue lists at the direction of the project director 68

Running Information Services as a Business • Maintaining the project change of scope documentation at the direction of the project director • Attending the weekly working team meetings and taking minutes • Attending the CRE and project manager meetings with working clients and business sponsors as need be and taking minutes • Collecting project artifacts (e.g., plans, scripts, best practices, system components) for reuse • Encouraging and promoting reuse within project teams Because these tasks must be accomplished in the course of project delivery anyway, it is more efficient to establish a center of excellence in these skills, whose members can ensure broad-based IS compliance with best practices. Furthermore, nothing about the processes described herein is static. As IS and its customers learn from the use of the aforementioned processes, they will fine-tune and adapt them based upon practical experience. Project office personnel will become the keepers and the chroniclers of this institutional knowledge. And because they operate independently of any particular IS service delivery or project team, the project office staff are in a position to objectively advocate for and monitor the success of commitment management processes. In short, it is this team that will help IS to run like a business, facilitating and supporting the greater team’s focus on successfully meeting customer requirements. CONCLUSION In a world of growing complexity, where time is of the essence and the resources required to effectively deploy IT remain constrained, the use of the frameworks and tools outlined in this chapter should prove useful. The keys to success remain: a solid focus on customer value, persistence in the use of standards, rigorously defined and executed processes, quality and continual communication, and a true commitment to collaborative work. Notes 1. This chapter is based on nearly five years of process-development efforts and field testing at New England Financial Services, the Hurwitz Group, and Northeastern University. The author wishes to acknowledge the many fine contributions of his colleagues to the outcomes described herein. The author also wishes to thank in particular Bob Weir and Rick Mickool of Northeastern University, and Jerry Kanter and Jill Stoff of Babson College, for their aid and advice in preparing this chapter. Any errors of omission or commission are the sole responsibility of the author. 2. Note that some organizations wisely set aside an IT reserve each year in anticipation of unexpected costs such as project overruns, emerging technology investments, and changes in the enterprise’s business plans. 3. One of the primary justifications for service level and commitment management is to address the all-too-familiar phenomenon whereby business leaders commit the enterprise to IT investments without fully appreciating the total cost of ownership associated with


ACHIEVING STRATEGIC IT ALIGNMENT

4.

5.

6.

7.

8. 9.

10. 11.

12.

70

their choices. Without proper planning, such a course of action can tie the hands of the IS organization for years to come and expose the enterprise to technological obsolescence. SLAs are created through an annual process to address work on existing IT assets, including all nondiscretionary (maintenance and support) IT costs, such as vendor-based software licensing and maintenance fees, as well as the discretionary costs associated with system enhancements below some threshold amount (e.g., $10,000 per enhancement). Typically, SLA work will at times entail the upgrade costs of system or Web site hardware and software as well as internal and external labor costs, license renewals, etc. The project commitment process governs the system development life cycle for a particular project, encompassing all new IT asset project work, as well as those few systems or Web site enhancements that are greater than the SLA threshold project value. Typically, project work will entail the purchase costs of new system/Web site hardware and software as well as internal and external labor costs, initial product licensing, etc. Once a project deliverable is in production, its ongoing cost is added to the appropriate SLA for the coming year of service delivery. A particularly comprehensive consideration of this subject can be found in Rick Sturm, Wayne Morris, and Mary Jander’s Foundations of Service Level Management (Indianapolis, IN: SAMS, 2000). Unless the enterprise’s leadership has chosen to operate IS as a separate entity with its own profit and loss statement, the author would advocate against the establishment of a charge-back or transfer pricing system between IS and its customers. Instead, the business and IS leaderships should jointly agree on the organization’s overall IT funding level and agree that IS manage those funds in line with the service level and project commitment management processes outlined in this chapter. See Sturm, Morris, and Jander, pp. 189–196. For example, an SLA that resembles a formal business contract is appropriate and necessary for a multi-operating unit enterprise where each line of business runs on its own P&L and must be charged back for the IS services that it consumes. On the other hand, such an SLA might only confuse and frighten the executives of an institution of higher education who are unaccustomed to formal and rigorous modes of business communication. As is so often the case, if the IS help desk or call center is the customer’s initial point of entry into IS support services, this bears repeating throughout the SLA. It is particularly important that IS representatives communicate to business unit management what they need to do as part of their partnership with IS to ensure the success of the services reflected in the SLA. This is a customer communication and education process. It is not to be dictated but needs to be agreed to. The particulars of the responsibility list will vary from one organization to another. 
The following list of customer responsibilities is meant only for illustration purposes: (1) to operate within the information technology funding allocations and funding process as defined by enterprise management; (2) to work in close collaboration with the designated IS customer relationship executive to initially frame this SLA and to manage within its constraints once approved; (3) to collaborate throughout the life cycle of the project or process to ensure the ongoing clarity and delivery of business value in the outcomes of the IT effort, including direct participation in and ownership of the quality assurance acceptance process; (4) to review, understand, and contribute to systems documentation, including project plans and training materials, as well as any IS project or service team communications such as release memos; (5) throughout the life cycle of the process, to evaluate and ultimately authorize business applications to go into production; (6) to distribute pertinent information to all associates within the business unit who utilize the products and services addressed in this SLA; (7) to ensure that business unit hardware and associated operating software meet or exceed the business unit’s system-complex minimum hardware and software requirements; (8) to report problems using the problem-reporting procedure detailed in this service level agreement, including a clear description of the problem; (9) to provide input on the quality and timeliness of service; (10) to prioritize work covered under this service agreement and to provide any ongoing prioritization needed as additional business requirements arise. The key IS service metrics of concern to most customers include system availability, system response time, mean time to failure, mean time to service restoration, support services availability, and as cited above, response time from support staff when problems occur

or when hands need to be held. While typical customers would like an immediate response to their issues, they will recognize the resource constraints of IS as long as they understand in advance the standards of service under which IS operates. However, IS should take pains to ensure that "availability" and "response time" reflect the total customer experience and not some subset of service. For example, the network is not restored from a customer perspective merely because the servers are back up online. The customer's applications must be restored as well so that regular business may be transacted.
13. The executive sponsor is the senior executive to whom the SLA is addressed and whose general responsibility it is to approve the use of IS resources as described in the SLA.
14. It is absolutely essential that the business and not IS rule on project priorities. But this does not absolve the IS team of the responsibilities for both consolidating and leveraging IT requests that in their view bring the greatest benefit to the enterprise, and identifying infrastructure and other IT-enabling investments that are a necessary foundation for business-enabling IT projects.
15. The project director is the IS party responsible for project delivery and the overall coordination of internal and external information technology resources. The director will work hand-in-hand with the working clients to ensure that project deliverables are in keeping with the customer's requirements.
16. The IS project manager is staff to the IS project director. This support person will develop and maintain project commitment documents and plans, facilitate and coordinate project activities, carry out business process analysis, prepare project status reports, manage project meetings, record and issue meeting minutes, and perform many other tasks as required to ensure successful project delivery.
17. Some organizations will allow working clients to serve as IS project directors and even project managers. In the author's view, this is a mistake. While the working client is essential to any IS project's success, contributing system requirements and business process expertise to the effort, very few working clients have experience in leading multi-tiered IT projects, especially those involving outside technical contractors and consultants. Leave this work to an appropriately skilled IS manager, allowing working clients to contribute where they add greatest value.
18. The levels of commitment run from "request," where the customer asks for IS help, to "speculation," where IS responds based on a series of suppositions, to "offer," where IS nails down its assumptions, to "commit," where both the customer and IS are in a position to formally commit resources to the project.


Chapter 6

Managing the IT Procurement Process
Robert L. Heckman

An IT procurement process, formal or informal, exists in every organization that acquires information technology. As users of information systems increasingly find themselves in roles as customers of multiple technology vendors, this IT procurement process assumes greater management significance. In addition to hardware, operating system software, and telecommunications equipment and services — information resources traditionally acquired in the marketplace — organizations now turn to outside providers for many components of their application systems, application development and integration, and a broad variety of system management services. Yet despite this trend, there has to date been little, if any, research investigating the IT procurement process. DEVELOPMENT OF THE FRAMEWORK In January 1994, the Society for Information Management (SIM) Working Group on Information Technology Procurement was formed to exchange information on managing IT procurement, and to foster collaboration among the different professions participating in the IT procurement process. This chapter presents a model of the IT procurement process which was developed by the SIM Working Group to provide a framework for studying IT procurement. Specifically, the IT Procurement Process Framework was developed by a 12-member subgroup comprised of senior IT procurement executives from large North American companies. The task of developing the framework took place over the course of several meetings and lasted approximately one year. A modified nominal group process was used, in which individual members independently developed frameworks that described the IT procurement process as they understood it. In a series of several work sessions, these individual models were synthesized and combined to produce the six-process framework presented below. Once the six major procurement processes had been identified, a modified nominal group process was once again followed to elicit the sub-

Exhibit 1. Major Processes in IT Procurement. Deployment Processes: D1 Requirements Determination, D2 Acquisition, D3 Contract Fulfillment. Management Processes: M1 Supplier Management, M2 Asset Management, M3 Quality Management.

processes to be included under each major process. Finally, a nominal group process was once again used to elicit a set of key issues, which the group felt presented managerial challenges in each of the six processes. The key issues were conceived of as the critical questions that must be successfully addressed to effectively manage each process. Thus, they represent the most important issues faced by those executives responsible for the management of the IT procurement function. The process framework and key issues were reviewed by the Working Group approximately one year later (summer 1996), and modifications to definitions, sub-processes, and key issues were made at that time. The key issue content analysis described below was conducted following a Working Group review in early 1997. THE IT PROCUREMENT FRAMEWORK: PROCESSES, SUB-PROCESSES, AND KEY ISSUES The IT Procurement Process Framework provides a vehicle to systematically describe the processes and sub-processes involved in IT procurement. Exhibit 1 illustrates six major processes in IT procurement activities: three deployment processes (D1, D2, and D3) and three management processes (M1, M2, and M3). Each of these major processes consists of a number of sub-processes. The Appendix at the end of this chapter lists the subprocesses included in each of the major processes, as well as the key issues identified by the Working Group. Deployment Processes Deployment processes consist of activities that are performed (to a greater or lesser extent) each time an IT product or service is acquired. Each individual procurement can be thought of in terms of a life cycle that begins with requirements determination, proceeds through activities involved in the actual acquisition of a product or service, and is completed as the 74

Managing the IT Procurement Process terms specified in the contract are fulfilled. Each IT product or service that is acquired has its own individual iteration of this deployment life cycle. D1. Requirements determination is the process of determining the business justification, requirements, specifications and approvals to proceed with the procurement process. It includes sub-processes such as organizing project teams, using cost–benefit or other analytic techniques to justify investments, defining alternatives, assessing relative risks and benefits defining specifications, and obtaining necessary approvals to proceed with the procurement process. D2. Acquisition is the process of evaluating and selecting appropriate suppliers and completing procurement arrangements for the required products and services. It includes identification of sourcing alternatives, generating communications (such as RFPs and RFQ) to suppliers, evaluating supplier proposals, and negotiating contracts with suppliers. D3. Contract fulfillment is the process of managing and coordinating all activities involved in fulfilling contract requirements. It includes expediting of orders, acceptance of products or services, installation of systems, contract administration, management of post-installation services such as warranty and maintenance, and disposal of obsolete assets. Management Processes Management processes consist of those activities involved in the overall governance of IT procurement. These activities are not specific to any particular procurement event, but rather are generalized across all such events. Three general classes of IT procurement management processes are supplier management, asset management, and quality management. M1. Supplier management is the process of optimizing customer–supplier relationships to add value to the business. It includes activities such as development of a supplier portfolio strategy, development of relationship strategies for key suppliers, assessing and influencing supplier performance, and managing communication with suppliers. M2. Asset management is the process of optimizing the utilization of all IT assets throughout their entire life cycle to meet the needs of the business. It includes activities such as development of asset management strategies and policies, development and maintenance of asset management information systems, evaluation of the life cycle cost of IT asset ownership, and management of asset redeployment and disposal policies. M3. Quality management is the process of assuring continuous improvement in the IT procurement process and in all products and services acquired for IT purposes in an organization. It includes activities 75

Exhibit 2. Eight Themes Identified from 76 Key Issues

Code  Theme                                                            No. Key Issues
P     Process management, design, and efficiency                                  21
M     Measurement, assessment, evaluation (of vendor and self)                    16
ER    External relationships (with supplier)                                       9
IR    Internal relationships (internal teams, roles, communication)                9
S     Strategy and planning                                                        7
L     Legal issues                                                                 6
F     Financial, total cost of ownership (TCO) issues                              6
E     Executive support for procurement function                                   2

such as product testing, statistical process control, acceptance testing, quality reviews with suppliers, and facility audits. KEY IT PROCUREMENT MANAGEMENT ISSUES The Appendix at the end of the chapter presents 76 key IT procurement management issues organized by process, that were identified by the members of the Working Group. These issues represent the beliefs of these domain experts concerning the most serious challenges facing managers of the IT procurement function. To better understand the key issues, a content analysis was performed to determine if there were a few main themes underlying these questions. The content analysis identified eight themes. The eight themes, their codes, and the frequency of occurrence are shown in Exhibit 2. These theme codes also appear in the Appendix for each key issue. The four themes that were the most important to senior procurement managers in the SIM Working Group are described below. Process Management, Design, and Efficiency [P] Practicing IT procurement managers are most concerned with the issue of how to make the procurement process more efficient. The questions that reflect this theme address the use of automated tools such as EDI and procurement cards, reduction of cycle time in contracting processes, development and use of asset tracking systems and other reporting systems, and the integration of sub-processes at early and later stages of the procurement life cycle. The emergence of process efficiency as the leading issue may indicate that procurement managers are under pressure to demonstrate the economic value of their organizational contribution, and thus follow the last decade’s broad management trend of rigorously managing costs. Measurement, Assessment, Evaluation [M] The second most important theme concerns the search for reliable and valid ways to evaluate and assess performance. This search for useful 76

Managing the IT Procurement Process assessment methods and measures is directed both at external suppliers and at the internal procurement process itself. The latter focus is consistent with the notion that procurement managers are looking for objective ways to assess and demonstrate their contribution. The focus on supplier assessment reflects an understanding that successful supplier relationships must be built on a foundation of high-quality supplier performance. External Relationships [ER] and Internal Relationships [IR] The third and fourth most frequently cited themes deal with the issue of creating effective working relationships. The importance of such relationships is an outgrowth of the cross-functional nature of the IT procurement process within organizations and the general transition from internal to external sources for information resource acquisition. Venkatraman and Loh (1994) characterize the information resource acquisition process as having evolved from managing a portfolio of technologies to managing a portfolio of relationships. The results of our analysis suggest that practicing managers agree. A MANAGEMENT AGENDA FOR THE IT PROCUREMENT PROCESS The process framework and key issues identified by the SIM IT Procurement Working Group suggest an agenda for future efforts to improve the management of the IT procurement process. The agenda contains five action items that may best be carried out through a collaboration between practicing IT procurement managers and academic researchers. The action items are: 1. Develop IT procurement performance metrics and use them to benchmark the IT procurement process. 2. Clarify roles in the procurement process to build effective internal and external relationships. 3. Use the procurement process framework as a tool to assist in reengineering the IT procurement process. 4. Use the framework as a guide for future research. 5. Use the framework to structure IT procurement training and education. Develop IT Procurement Performance Metrics and Use Them to Benchmark the IT Procurement Process. Disciplined management of any process requires appropriate performance metrics, and members of the Working Group have noted that good metrics for the IT procurement processes are in short supply. The process framework is currently providing structure to an effort by the Working Group to collect a rich set of performance metrics that can be used to raise the level 77

of IT procurement management. In this effort, four classes of performance metrics have been identified:
1. Effectiveness metrics
2. Efficiency metrics
3. Quality metrics
4. Cycle time metrics
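A minimal sketch of how these four classes of metrics might be prototyped from a log of completed acquisitions is shown below. It is illustrative only: the record layout, the field names, and the formulas are assumptions made for this example and are not metrics defined by the Working Group.

from dataclasses import dataclass
from datetime import date
from statistics import mean

@dataclass
class ProcurementEvent:
    """One completed acquisition; all field names are hypothetical."""
    request_approved: date      # D1: requirement approved
    fulfilled: date             # D3: contract requirements met
    promised: date              # delivery date committed to the customer
    admin_cost: float           # internal cost of running the procurement
    contract_value: float       # value of the acquired product or service
    accepted_first_time: bool   # passed acceptance without rework

def procurement_metrics(events: list[ProcurementEvent]) -> dict[str, float]:
    """Roll a batch of events up into the four classes of metrics."""
    return {
        # Effectiveness: share of acquisitions fulfilled by the promised date
        "effectiveness_pct": 100 * mean(1.0 if e.fulfilled <= e.promised else 0.0 for e in events),
        # Efficiency: administrative cost per dollar of contract value
        "efficiency_cost_per_dollar": sum(e.admin_cost for e in events) / sum(e.contract_value for e in events),
        # Quality: share of acquisitions accepted without rework
        "quality_pct": 100 * mean(1.0 if e.accepted_first_time else 0.0 for e in events),
        # Cycle time: average days from approved requirement to fulfillment
        "cycle_time_days": mean((e.fulfilled - e.request_approved).days for e in events),
    }

log = [
    ProcurementEvent(date(2002, 1, 10), date(2002, 4, 1), date(2002, 4, 15), 4000, 120000, True),
    ProcurementEvent(date(2002, 3, 5), date(2002, 7, 30), date(2002, 7, 1), 9000, 300000, False),
]
for name, value in procurement_metrics(log).items():
    print(f"{name}: {value:.2f}")

Even a rough prototype of this kind helps a benchmarking team agree on definitions before investing in formal data collection.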

Closely related to the metrics development issue is the need felt by many procurement professionals to benchmark critical procurement processes. The framework provides a guide to the process selection activity in the benchmarking planning stage. For example, the framework has been used by several companies to identify supplier management and asset management sub-processes for benchmarking.
Clarify Roles in the Procurement Process to Build Effective Internal and External Relationships
IT procurement will continue to be a cross-functional process that depends on the effective collaboration of many different organizational actors for success. Inside the customer organization, representatives of IS, legal, purchasing, finance, and user departments must work together to buy, install, and use IT products and services. Partnerships and alliances with supplier and other organizations outside the boundaries of one's own firm are more necessary than ever as long-term outsourcing and consortia arrangements become more common. The key question is how these multifaceted relationships should be structured and managed. Internally, organizational structures, roles, standards, policies, and procedures must be developed that facilitate effective cooperation. Externally, contracts must be crafted that clarify expectations and responsibilities between the parties. Recent research, however, suggests that formal mechanisms are not always the best means to stimulate collaboration. The most useful forms of collaboration are often discretionary; that is, they may be contributed or withheld without concern for formal reward or sanction (Heckman and Guskey, 1997). Formal job descriptions, procedures, and contracts will never cover all the eventualities that may arise in complex relationships. Therefore, managers must find the cultural and other mechanisms that create environments which elicit discretionary collaboration both internally and externally.
Use the Procurement Process Framework as a Tool to Assist in Reengineering the IT Procurement Process
Another exciting use for the framework is to serve as the foundation for efforts to reengineer procurement processes. One firm analyzed the sub-processes involved in the requirements analysis and acquisition stages of

the procurement life cycle to reduce procurement and contracting cycle time. Instead of looking at the deployment sub-processes as a linear sequence of activities, this innovative company used the framework to analyze and develop a compression strategy to reduce the cycle time in its IT contracting process by performing a number of sub-processes in parallel.
Use the Framework as a Guide for Future Research
The framework has been used by the SIM IT Procurement Working Group to identify topics of greatest interest for empirical research. For example, survey research investigating acquisition (software contracting practices and contracting efficiency), asset management (total life-cycle cost of ownership and asset tracking systems), and supplier management (supplier evaluation) has recently been completed. The key issues identified in the current chapter can likewise be used to frame a research agenda that will have practical relevance to practitioners.
Use the Framework to Structure IT Procurement Training and Education
The framework has been used to provide the underlying structure for a university course covering IT procurement. It also provides the basis for shorter practitioner workshops, and can be used by companies developing in-house training in IT procurement for users, technologists, and procurement specialists.
This five-item agenda provides a foundation for the professionalization of the IT procurement discipline. As the acquisition of information resources becomes more market oriented and less a function of internal development, the role of the IT professional will necessarily change. The IT professional of the future will need fewer technology skills because these skills will be provided by external vendors that specialize in supplying them. The skills that will be critical to the IT organization of the future are the marketplace skills that will be found in IT procurement organizations. The management agenda described in this chapter provides a first step toward the effective leadership of such organizations.
References and Further Reading
Barki, H., Rivard, S., and Talbot, J. (1993), "A Keyword Classification Scheme for IS Research Literature: An Update," MIS Quarterly, (17:2), June, pp. 209–226.
Davenport, T. and Short, J. (1990), "The New Industrial Engineering: Information Technology and Business Process Redesign," Sloan Management Review, Summer, pp. 11–27.
Hammer, M. (1990), "Reengineering Work: Don't Automate, Obliterate," Harvard Business Review, July/August, pp. 104–112.
Heckman, R. and Guskey, A. (1997), "The Relationship Between University and Alumni: Toward a Theory of Discretionary Collaborative Behavior," Journal of Marketing Theory and Practice.


Heckman, R. and Sawyer, S. (1996), "A Model of Information Resource Acquisition," Proceedings of the Second Annual American Conference on Information Systems, Phoenix, AZ.
Lacity, M. C., Willcocks, L. P., and Feeny, D. F. (1995), "IT Outsourcing: Maximize Flexibility and Control," Harvard Business Review, May–June, pp. 84–93.
McFarlan, F. W. and Nolan, R. L. (1995), "How to Manage an IT Outsourcing Alliance," Sloan Management Review, (36:2), Winter, pp. 9–23.
Reifer, D. (1994), Software Management, Los Alamitos, CA: IEEE Press.
Rook, P. (1986), "Controlling Software Projects," Software Engineering Journal, pp. 79–87.
Sampler, J. and Short, J. (1994), "An Examination of Information Technology's Impact on the Value of Information and Expertise: Implications for Organizational Change," Journal of Management Information Systems (1:2), Fall, pp. 59–73.
Teng, J., Grover, V., and Fiedler, K. (1994), "Business Process Reengineering: Charting a Strategic Path for the Information Age," California Management Review, Spring, pp. 9–31.
Thayer, R. (1988), "Software Engineering Project Management: A Top-Down View," in R. Thayer (ed.), IEEE Proceedings on Project Management, Los Alamitos, CA: IEEE Press, pp. 15–53.
Vaughn, M. and Parkinson, G. (1994), Development Effectiveness, New York: John Wiley & Sons.
Venkatraman, N. and Loh, L. (1994), "The Shifting Logic of the IS Organization: From Technical Portfolio to Relationship Portfolio," Information Strategy: The Executive's Journal, Winter, pp. 5–11.


Managing the IT Procurement Process

Appendix Major Processes, Sub-Processes, and Key Issues

Deployment Process D1: Requirements Determination Process Definition

The process of determining the business justification, requirements, specifications, and approvals to proceed with the procurement process. Sub-processes

y Identify need. y Put together cross-functional team and identify roles and responsibilities. y Continuously refine requirements and specifications in accordance with user needs. y Gather information regarding alternative solutions. y Perform cost-benefit analysis or other analytic technique to justify expenditure. y Evaluate alternative solutions (including build/buy, in-house/outsource, etc.) and associated risk and benefits. y Develop procurement plans that are integrated with project plans. y Gain approval for the expenditure. y Develop preliminary negotiation strategies. Key Issues [Themes]

y What are the important components of an appropriate procurement plan? [S] y How much planning (front-end loading) is appropriate or necessary for different types of acquisitions (e.g., commodity purchases versus complex, unique acquisitions)? [S] y How should project teams be configured for different types of acquisitions (appropriate internal and external resources, project leader, etc.)? [IR] y How should changes in scope and changes in orders be handled? [P] y What are the important costs versus budget considerations? [F] y What are the most effective methods of obtaining executive commitment? [E] y Can requirements be separated from wants? [P] y Should performance specifications and other outputs be captured for use in later phases, such as quality management? [P] 81

ACHIEVING STRATEGIC IT ALIGNMENT Deployment Process D2: Acquisition Process Definition

The process of evaluating and selecting appropriate suppliers and completing procurement arrangements for the required products and services. Sub-processes

y Develop sourcing strategy, including the short list of suitable suppliers. y Generate appropriate communication to suppliers (RFP, RFQ, etc.), including financing alternatives. y Analyze and evaluate supplier responses and proposals. y Plan formal negotiation strategy. y Negotiate contract. y Review contract terms and conditions. y Award contract and execute documents. y Identify value added from the negotiation using appropriate metrics.

Key Issues [Themes]

y Is there support of corporate purchasing programs, policies, and guidelines (which can be based on technology, financing, accounting, competitive impacts, social impacts, etc.)? [E] y What tools optimize the procurement process? [P] € EDI € Autofax € Procurement cards y What processes in the acquisition phase can be eliminated, automated, or minimized? [P] y Is it wise to be outsourcing all or part of the procurement process? [IR] y What are the appropriate roles of users, legal, purchasing, and IS in the procurement process? [IR]

Deployment Process D3: Contract Fulfillment Process Definition

y The process of managing and coordinating all activities involved in fulfilling contract requirements.

Sub-processes

y Expedite orders and facilitate required changes. y Receive material and supplies, update databases, and reconcile discrepancies. y Accept hardware, software, or services. y Deliver materials and services as required, either direct or to drop-off points. y Handle returns. y Install hardware, software, or services.


Managing the IT Procurement Process y y y y y

Administer contract. Process invoices and issue payment to suppliers. Resolve payment problems. Manage post-installation services (e.g., warranty, maintenance, etc.). Resolve financial status and physical disposal of excess or obsolete assets. y Maintain quality records. Key Issues [Themes]

y What are some provisions for early termination and renewals? [L] y What are the best methods for assessing vendor strategies for ongoing maintenance costs? [ER] y What interaction between various internal departments aids the processes? [IR]

Management Process M1: Supplier Management Process Definition

The process of optimizing customer-supplier relationships to add value to the business Sub-processes

y Categorize suppliers by value to the organization (e.g., volume, sole source, commodity, strategic alliance). Allocate resources to most important (key) suppliers. y Develop and maintain a relationship strategy for each category of supplier. y Establish and communicate performance expectations that are realistic and measurable. y Monitor, measure, and assess vendor performance. y Provide vendor feedback on performance metrics. y Work with suppliers to improve performance continuously; know when to say when. y Continuously assess supplier qualifications against requirements (existing and potential suppliers). y Ensure relationship roles and responsibilities are well-defined. y Participate in industry/technology information sharing with key suppliers. Key Issues [Themes]

y How does anyone distinguish between transactional/tactical and strategic relationships? [ER] y How can expectations on both sides be managed most effectively? Should relationships be based on people-to-people understandings, or 83


solely upon the contractual agreement (get it in writing)? What is the right balance? [ER] How can discretionary collaborative behavior — cooperation above and beyond the letter of the contract — be encouraged? Are true partnerships with vendors possible, or does it take too long? What defines a partnership? [ER] How should multiple vendor relationships be managed? [ER] How should communication networks (both internal and external) be structured to optimize effective information exchange? Where are the most important roles and contact points? [IR] How formal should a measurement system be? What kind of report card is effective? What are appropriate metrics for delivery and quality? [M] What is the best way to continuously assess the ability of a vendor to go forward with new technologies? [M] What legal aspects of the relationship are of most concern (e.g., nondisclosure, affirmative action, intellectual property, etc.)? [L] What is the best way to keep current with IT vendor practices and trends? What role does maintaining market knowledge play in supplier management? [M] What is the optimal supplier–management strategy for a given environment? [S] How important is the development of master contract language? [L] In some sectors there is an increasing number of suppliers and technologies, although in the others vendor consolidation is occurring. Under what circumstances should the number of relationships be expanded or reduced? [ER] What are the best ways to get suppliers to buy into master agreements? [L] What are the best ways to continuously judge vendor financial stability? [M] Where is the supplier positioned in the product life cycle? [M] How should suppliers be categorized (e.g., strategic, key, new, etc.) to allow for prioritization of efforts? [M] What are the opportunities and concerns to watch for when one IT supplier is acquired by another? [M]
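The measurement questions above ask what kind of supplier report card is effective and which delivery and quality metrics are appropriate. One hypothetical shape for such a report card is sketched below; the metric names, normalization rules, and weights are invented for illustration and are not part of the Working Group's framework.

from dataclasses import dataclass

@dataclass
class SupplierQuarter:
    """Hypothetical quarterly observations for one supplier."""
    supplier: str
    on_time_delivery_pct: float   # delivery metric, on a 0-100 scale
    defect_rate_pct: float        # quality metric, lower is better
    responsiveness_1_to_5: float  # relationship rating from an internal survey

def report_card(q: SupplierQuarter, weights=None) -> dict[str, float]:
    """Normalize each metric to a 0-100 scale and blend them into one score."""
    weights = weights or {"delivery": 0.5, "quality": 0.3, "responsiveness": 0.2}
    normalized = {
        "delivery": q.on_time_delivery_pct,                      # already on a 0-100 scale
        "quality": max(0.0, 100.0 - 10.0 * q.defect_rate_pct),   # penalize each defect point
        "responsiveness": (q.responsiveness_1_to_5 - 1.0) / 4.0 * 100.0,
    }
    normalized["overall"] = round(sum(weights[k] * normalized[k] for k in weights), 1)
    return normalized

# A report card for a single, fictional supplier.
print(report_card(SupplierQuarter("Acme Networks", 92.0, 1.5, 4.2)))

Whether such a formula is shared with the vendor verbatim or used only internally is exactly the kind of relationship decision these key issues raise.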

Management Process M2: Asset Management Process Definition

The process of optimizing the utilization of all IT assets throughout their entire life cycle to meet the needs of the business 84

Managing the IT Procurement Process Sub-processes

y Develop and maintain asset management strategies and policies. Identify and determine which assets to track; they may include hardware, software licenses, and related services. y Implement and maintain appropriate asset management databases, systems, and tools. y Develop a disciplined process to track and control inventory to facilitate such things as budgeting, help desk, life-cycle management, software release distribution, capital accounting, compliance monitoring, configuration planning, procurement leverage, redeployment planning, change management, disaster recovery planning, software maintenance, warranty coverage, lease management, and agreement management. y Identify the factors that make up the total life-cycle cost of ownership. y Communicate a software license compliance policy throughout the organization. Key Issues [Themes]

y What assets are included in IT asset management (e.g., human resources, consumables, courseware)? [F] y How can legal department holdups be reduced? [P] y What is the best way to communicate corporatewide agreements? [IR] y How should small ticket assets be handled? [P] y How does a company move from reactive to proactive contracting? [S] y Are there ways of dealing with licenses that require counts of users? [L] y What are the best ways of managing concurrent software licensing? [L] y Can one be contracting for efficiency using national contracts for purchase, servicing, licensing? [P] y How can software be managed and tracked as an asset? [F] y How can the workload in software contracting be reduced? [P] y Are there ways to encourage contract administration to be handled by the vendor? [P] y Is it possible to manage all three life cycles simultaneously: technical, functional, and economical? [S] y How does a company become proactive in risk management? [S] y What is the appropriate assignment of internal responsibilities (e.g., compliance)? [IR] y Do all items need to be tracked? [P] y How much control (a) can the company afford? (b) does the company need? (c) does the company want? [F] y What are the critical success factors for effective asset management? [S] 85

ACHIEVING STRATEGIC IT ALIGNMENT y What practices are most effective for the redeployment of assets? [P] y Are there adequate systems available to track both hard and soft assets? Are there any integrated solutions (support, tracking, and contract management)? [P] y What are the best ways to handle the rapid increase in volume and rapid changes in technology? [P] y What is the appropriate reaction to dwindling centralized control of the desktop with nonconformance to guidelines and procedures? [IR] y Is there a true business understanding of the total cost of ownership over the entire life cycle of an asset? [F] y What are the impacts on organizational structure? [IR] y What kind of reporting is most effective? [P] y How can one manage tax issues — indemnification, payments, and insurance issues? [F] y What issues should be considered in end-of-lease processes? [P] Management Process M3: Quality Management Process Definition

The process of assuring continuous improvement in all elements of the IT procurement framework Sub-processes

y Define and track meaningful process metrics on an ongoing basis. y Conduct periodic quality reviews with suppliers: € Provide formal feedback to vendors on their performance. € Facilitate open and honest communication in the process. y Collect and prioritize ideas for process improvement. y Use formal quality improvement efforts involving the appropriate people: € Participants may include both internal resources and vendor personnel. y Recognize and reward quality improvement results on an ongoing basis: € Recognize nonperformance/unsatisfactory results. y Audit vendors’ facilities and capabilities. y Conduct ongoing performance tests against agreed upon standards (e.g., acceptance test, stress test, regression test, etc.). y Utilize appropriate industry standards (e.g., ISO 9000, SEI Capability Maturity Model). y Periodically review vendors’ statistical process control data.


Managing the IT Procurement Process Key Issues [Themes]

y What is the best way to drive supplier quality management systems? [ER] y What is the appropriate mix of audits (supplier/site/regional, etc.) for quality and procedural conformance? [M] y What is the importance of relating this process to the earliest stages of the requirement determination process? [P] y What corrective actions are effective? [P] y When and how is it appropriate to audit a supplier’s financials? [M] y What is an effective way to audit material or services received? [M] y What is the best way to build quality assurance into the process, as opposed to inspecting for quality after the fact? [P] y What metrics are the most meaningful quantitative measures? [M] y How can one best measure qualitative information, such as client satisfaction? [M] y When should one use surveys, and how can they be designed effectively? [M] y How often should measurements be done? [M] y How does one ensure that the data collected is valid, current, and relevant? [M] y What is the best medium and format to deliver the data to those who need it? [P] y What are used as performance and quality metrics for the IT procurement function? [M] y How does one effectively recognize and reward quality improvement? [ER] y When is it time to reengineer a process rather than just improve it? [P] y How much communication between vendor and customer is needed to be effective? [ER]



Chapter 7

Performance Metrics for IT Human Resource Alignment
Carol V. Brown

A key asset of any organization is its human resources. In the late 1990s, attracting, recruiting, and retaining IT workers became a major challenge for human resource managers, and many IT organizations established their own specialists to manage this asset. Today, the supply of IT professionals is more in balance with the demand, and managers need to turn their attention to proactively aligning their IT human resources with the organization's current and future needs. The objective of this chapter is to present some of the issues involved in designing performance metrics to better align the IT organization with the business. The chapter begins with a high-level goal alignment framework. Then some guidelines for selecting what to measure, and how to measure, are presented. A case example is then used to demonstrate some practices in detail. The chapter concludes with a short discussion of best practices and ongoing challenges. IT PERFORMANCE ALIGNMENT A major assumption underlying the guidelines in this chapter is that organizations align their performance metrics with their goals. As shown in the framework in Exhibit 1, IT performance metrics are directly aligned with the performance goals for an IT organization. The IT performance goals are aligned with the goals of the organization via the goals for the IT function as well as the goals for the human resources (HR) function. In many organizations, alignment with the goals of the HR function is achieved by assigning an HR specialist to the IT organization. Recently,

Exhibit 1. Goal Alignment Framework. Organizational goals shape both HR function goals and IT function goals, which together drive IT performance goals and, in turn, IT performance metrics.

many IT organizations have also implemented a matrix reporting relationship for an HR specialist, which creates an accountability to both the IT functional head and the HR head. Another trend has been assigning traditional HR tasks to one or more IT managers. Both of these approaches have become more prevalent as the recruiting, rewarding, and retention of the IT workforce has become recognized as too critical to leave in the hands of HR specialists whose accountability is only to the HR function. IT organizations typically have well-established metrics for IT delivery performance — both IT application project delivery metrics and IT service delivery metrics. For example, the traditional success metrics for a new IT systems project are on-time, within-budget delivery of a high-quality system with the agreed-upon functionality and scope. However, IT organizations that only track IT delivery metrics are not necessarily aligning their IT human resources with the organization’s goals. Other IT performance metrics that should be captured are IT human capital effectiveness measures, such as (1) the desired inventory of internal IT skillsets, (2) the optimal number of external (contract) employees as a percentage of the total IT workforce, and (3) the ideal turnover rate for internal IT employees to ensure knowledge retention as well as an infusion of new skills. These human capital metrics are context driven. For example, ideal turnover rates vary greatly, and even within a given organization the ideal turnover rate for the IT function may be significantly different than the ideal turnover rates for other functions. An average ideal turnover rate for IT workers just above 8 percent was recently reported, based on a sample of more than a hundred U.S.-based manufacturing and service companies.1 However, only about half of those surveyed (48 percent) were able to 90

Exhibit 2. What to Measure. Characteristics of the IT people, the IT processes, and the IT work environment drive the effectiveness outcomes of IT delivery goals and human capital goals.

attain their goal during a time in which there was perceived to be an acute shortage of IT professionals (mid-year 1998). Further, IT managers need to explicitly set guidelines for the ideal “balance” between IT delivery goals and IT human capital goals for their managers to achieve. This is because these two sets of IT goals are often in conflict. For example, repeatedly assigning the most knowledgeable technical resources to a maintenance project because of the in-depth knowledge of that resource may not help that person grow his or her technology skills: the project might be a success but the IT organization’s target inventory of new IT skills for future projects might be jeopardized. WHAT TO MEASURE A framework for thinking about key categories of metrics to assess an IT organization’s performance is shown in Exhibit 2. The outcome goals toward the right of the exhibit include the above-mentioned goals of IT delivery and IT human capital, and the desired balance of these potentially conflicting goals. Metrics are also needed for three other IT organization factors that impact these IT effectiveness goals: characteristics of the IT people (IT workforce), the in-place IT processes (including IT HR processes), and the IT work environment. Each of these categories is described below. IT People Metrics. Of importance here are metrics that capture progress

toward the development of skill proficiencies of the IT workforce and how well IT personnel are currently being utilized, in addition to customer satisfaction with IT resources. 91

ACHIEVING STRATEGIC IT ALIGNMENT IT Process Metrics. These metrics assess the quality of the processes being used to accomplish the IT work, including the effectiveness of IT human resource processes in developing human capital. Typical examples of the latter are recruiting effectiveness metrics, the number of training hours it takes for an employee to achieve a certain level of proficiency with a given technology, and the time it takes to staff a project with the requisite skills. IT Work Environment Metrics. Researchers have consistently found work environment variables to be highly valued by IT workers, including the extent to which they have opportunities to learn about new technologies and the extent to which they find their work to be challenging. More recent surveys of IT workers have also found two other workplace characteristics to be highly valued: opportunities to telecommute and flexible work hours.

Finally, achieving alignment between IT performance metrics and the multiple goals of the IT organization requires not only an investment in metrics design programs, but also an investment in periodic evaluation programs to ensure that the metrics being collected are also helping to achieve the desired behaviors and outcomes. That is, unintended behaviors can sometimes result due to deficiencies in a metrics program. We return to this important idea in the case example below. HOW TO MEASURE Determining how to measure an outcome or behavior requires careful consideration of several design characteristics and their trade-offs. Based on a synthesis of writings by thought leaders (e.g., Kaplan and Norton,2 S.A. and A.M. Mohrman3), these design characteristics can be grouped into four categories. Each category is described below, and examples are provided in Exhibit 3. Criteria for Measurement. The best criterion with which to measure an IT performance variable depends on the intended purpose of the performance metric. For example, is the metric to be used as the basis for a merit award, or is to be used for communications with business unit stakeholders? Is it a team-based or an individual worker metric? Source(s) for Measurement. For each measurement criterion, the best source for the metric needs to be selected. First of all, this will depend on the appropriate level of measurement; for example, is it a project, skillset group, or individual level metric? Can the metric be collected automatically as a part of a regular IT work process — for example, as part of a project log, project document, or computer-based training system? Is only one source needed, or will multiple sources be asked to measure the same variable — and if multiple sources, how will they be selected? For example, 92

Exhibit 3. How to Measure

Criteria
• Aligned with strategy
• For communication, analysis, rewards
• To internal and external audiences
• Team based or individual
• Time based
• Ratios or absolute

Source(s)
• Single versus multiple sources
  — Multiple projects, matrix reports
  — Multiple levels
  — Example: 360° above, lateral, below
• Employee reports
  — Potential for rater errors or bias
• Employee logs
• Project documentation
• Automated capture

Collection Method
• Quantitative items
  — Counts and ratios
  — Scaled (categorical, Likert-type, bimodal)
• Qualitative items
  — Open-ended questions
  — Anecdotal accounts (stories)
• Survey administration (paper versus electronic)
• In-person interviews (formal and informal)

Frequency
• Periodic, annual
• Continuous but aggregated (week, month, quarter, bi-annual)
• On-demand

in some organizations that have moved to a 360-degree individual performance appraisal system, IT workers help choose employees above them, below them, and peers who will be asked to provide formal evaluations of their work. Collection Method. Even after the measurement criteria and source are determined, design decisions associated with the methods to collect the performance data may still need to be carefully assessed. Some of the design choices may differ based on whether a quantitative measure or a qualitative measure is more appropriate. Quantitative metrics include counts and ratios, as well as scaled items. Common scales for capturing responses to sets of items include bimodal scales (with labels provided for two endpoints of a continuum) and Likert-type scales (such as a scale from 1 to 5 with labels provided for multiple points). Qualitative metrics can be collected as responses to open-ended questions, or as anecdotal accounts (or stories), which could yield insights that otherwise would not have been 93

ACHIEVING STRATEGIC IT ALIGNMENT tapped into. The choice of data collection methods can also have significant cost implications. For example, collection of data from targeted individuals via a survey may be less costly (and less time-intensive) than interview-based methods, and it also allows for anonymous responses. Data collected via a survey form is usually less rich and more difficult to interpret than data collected via telephone or in-person interview methods. Frequency. Another design consideration with major cost implications is the frequency with which to collect a given metric. Some metrics can be collected on an annual basis as part of an annual performance review process or an annual financial reporting process. Other metrics can be collected much more frequently on a regular basis — perhaps even weekly. Weekly metrics may or may not also be aggregated on a monthly, quarterly, or biannual basis. The most effective programs capture metrics at various appropriate time periods, and do not solely rely on annual processes to evaluate individual and unit performance.
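To make the "continuous but aggregated" option concrete, the sketch below rolls hypothetical weekly observations of a single metric up to monthly and quarterly averages. The metric, the dates, and the choice of a simple mean are assumptions for illustration, not a description of any particular organization's program.

from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical weekly observations of one metric (say, a utilization percentage),
# keyed by the Monday of the week in which each value was captured.
weekly_utilization = {
    date(2003, 1, 6): 71.0, date(2003, 1, 13): 74.5, date(2003, 2, 3): 80.0,
    date(2003, 3, 10): 68.0, date(2003, 4, 7): 77.5, date(2003, 5, 12): 83.0,
}

def aggregate(observations, period):
    """Average weekly observations into 'month' or 'quarter' buckets."""
    buckets = defaultdict(list)
    for day, value in observations.items():
        if period == "month":
            key = f"{day.year}-{day.month:02d}"
        elif period == "quarter":
            key = f"{day.year}-Q{(day.month - 1) // 3 + 1}"
        else:
            raise ValueError("period must be 'month' or 'quarter'")
        buckets[key].append(value)
    return {key: round(mean(values), 1) for key, values in sorted(buckets.items())}

print(aggregate(weekly_utilization, "month"))    # monthly averages
print(aggregate(weekly_utilization, "quarter"))  # quarterly averages

The same weekly stream can then feed a quarterly scorecard or an annual review without a separate year-end collection effort.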

The schedule for metric collection, as well as the mechanisms used for reporting results, must also be continuously reevaluated. Some IT organizations have adopted a “scorecard” approach, not unlike the “dashboard” templates that have been adopted for reporting critical business metrics. CASE EXAMPLE: IT PERFORMANCE METRICS AT NATURAL Natural is a large, Fortune 500-sized company competing in the energy industry, that embarked on an IT metrics redesign initiative as part of an organizational restructuring. The IT workforce had been receiving poor ratings in IT delivery and customer satisfaction. The new IT function goals were to achieve a high perceived value for IT delivery, customer service, and workforce utilization, as well as to build an IT talent pool with IT, business, and leadership skills. Exhibit 4 shows how Natural’s IT performance goals were aligned with the organization’s overall goals via not only IT function goals but also the HR function goals. Top management wanted its HR leaders to foster the development and retention of organizational talent to enable the company to meet its aggressive targets for profitability and growth within the context of a rapidly changing world. One of the new core values was to motivate workers with relevant incentives and performance metrics. Natural’s primary IT performance goals were to improve its IT capabilities and to improve perceptions of the value provided by the IT organization. IT application development resources that had been working within IT groups at the business division level were re-centralized to a corporate IT group to focus on improving IT capabilities. Each IT professional was assigned to a skillset group (center of excellence). Each center had one or 94

Exhibit 4. Aligning Goals and Metrics at Natural

• Organizational Goal: Exceed aggressive profitability targets and grow.
• HR Function Goals: Instill core values (vision and decision rights; virtues and talents; incentives and measures).
• IT Function Goals: High perceived value for IT delivery, customer service, and utilization; build IT talent for IT, business, and leadership skills.
• IT Performance Goals: Focus on improving IT capabilities and perceived value; reduced number of contract employees.
• IT Performance Metrics for the Systems Development Unit: Utilization, People Development, Technology Leverage, Financials, Project Execution, Customer Satisfaction.

more “coaches” who were responsible for training programs and personnel scheduling that would help their workers to hone their new IT skills. A related goal was to reduce the number of contract employees. As shown in Exhibit 4, six categories of IT performance metrics were identified as critical for this systems development unit to achieve its new IT performance goals: utilization, people development, technology leverage, customer satisfaction, project execution, and financials. The utilization metrics made explicit Natural’s new thrust on rebalancing its human capital and delivery goals: the specific metrics included time spent on personnel development and retooling. Metrics in two other categories specifically tracked performance gains for people development and technology leverage. Examples of specific metrics implemented for five of the categories (all but Financials) are provided in Exhibit 5. Because of the clear linkage between building IT talent and the organization’s new set of core values, the IT leaders gained approval from its business customers for pricing the group’s IT services with a 10 percent markup, in order to have a funding mechanism for investments in IT workforce development, improved IT processes, and state-of-the-art technologies. Two templates were completed for each of the specific metrics. The first template explicitly linked the IT unit’s vision and strategy to the metric category. Behaviors that the IT managers expected to be impacted by these metrics were also identified in detail. For example, for the Utilization metric 95

Exhibit 5. Specific Metrics by Category at Natural

Utilization
• Total utilization percentage
• Total development percentage
• Number of retoolings
• Number of days to fill personnel requisitions
• Contractor numbers

People Development
• Employee perception of development
• Average company experience, time in role
• Dollars spent on people development

Technology Leverage
• Employee proficiency level by tool category
• Tool learning curve (time to be proficient)
• Technology demand level

Customer Satisfaction
• Improvement in communication
• Accurate time, cost, and scope estimates
• Management of change issues
• Ability to meet customer quality expectations
• Ability to meet customer value expectations

Project Execution (Delivery)
• Predictability (on-time, under budget)
• Quality (maintenance cost versus projected)
• Methodology compliance
• Resource optimization level
• Project health rating (risk management)
• Number of trained project managers

category, the anticipated behavioral impacts included improved planning of resource needs, improved estimating of project time and costs, and better decision making about external hires (contractors) by IT HR managers. In addition, however, potentially unintended behaviors were also identified; that is, behaviors that could be inadvertently reinforced by introducing this new metric category. For example, measuring the utilization of employees from a given center of excellence could lead to over-scheduling of employees in order to achieve a high performance rating, but at the cost of less individual progress toward skill-building — because training typically took place during unscheduled “bench-time.” Another potential downside was that the supervisor (coach) might assign an employee who was on-the-bench to a project when she or he was not the best resource for 96

the project assignment, in order to achieve a high resource utilization rating. In this situation, higher utilization of IT workers might be achieved, but at the expense of project delivery and customer satisfaction goals. A second template was used to document characteristics of each specific metric, such as:
• General description of the measure
• What is measured (specific data elements and how they are related)
• Why this data was being measured (what it would be used for)
• The measurement mechanism (source and time period)
• How the measure is calculated (as relevant)
• The target performance level or score
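Such a template lends itself to a simple structured record. The sketch below is one hypothetical way to encode it: the field names mirror the list above, while the example values are invented and are not drawn from Natural's actual documentation.

from dataclasses import dataclass, field

@dataclass
class MetricDefinition:
    """Documented characteristics of one specific performance metric."""
    name: str
    category: str                 # one of the six metric categories
    description: str              # general description of the measure
    what_is_measured: str         # specific data elements and how they are related
    why_measured: str             # what the data will be used for
    mechanism: str                # source and time period
    calculation: str              # how the measure is calculated, if relevant
    target: float                 # target performance level or score
    owner: str = ""               # IT manager accountable for the metric category
    unintended_behaviors: list = field(default_factory=list)  # risks to monitor

# A purely illustrative instance.
utilization_pct = MetricDefinition(
    name="Total utilization percentage",
    category="Utilization",
    description="Share of available hours that center-of-excellence staff spend on scheduled work.",
    what_is_measured="Scheduled project hours and total available hours per employee per month.",
    why_measured="Supports resource planning and the rebalancing of delivery and development time.",
    mechanism="Time-tracking entries collected via intranet Web forms, aggregated monthly.",
    calculation="scheduled_hours / available_hours * 100",
    target=75.0,
    owner="Utilization category owner",
    unintended_behaviors=["Over-scheduling staff at the expense of training bench-time."],
)

Keeping the anticipated unintended behaviors in the same record gives each metric's owner a standing checklist for the periodic reviews described below.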

Of the six metric categories, two were measured monthly (utilization, and financials); three quarterly (people development, customer satisfaction, and project execution); and one biannually (technology leverage). Four new people development measures were baselined first because it was a new category of metrics that was considered key to demonstrating the success of the newly centralized IT unit. The technology leverage measures were baselined using a self-assessment survey of employees in conjunction with estimated demands for specific technologies and technology skillsets for both current and anticipated IT projects. Whenever possible, intranet-based Web forms were used for data collection. Although there was an emphasis on quantitative measures, qualitative measures were also collected. For example, a special form was available to internal customers to make it easy to collect “success stories.” Natural assigned one IT manager to be the “owner” of each metric category, not only as a category owner during the initial design and implementation of the relevant specific metrics, but also on an ongoing basis. Given the potential for influencing unintended behaviors (as described above), each owner was responsible for monitoring the behavioral impacts of each specific metric in that category. The owner was also relied upon to provide insight into the root causes of missed targets. Over the long term, metric owners would also be held accountable for anticipating potential changes to the efficacy of a specific metric due to changes in goals and processes that occurred inside and outside the IT organization. BEST PRACTICES The metrics project at Natural is a successful case example of quickly developing new metrics to incent IT workers for new behaviors to meet new IT performance goals within a systems development context. It also demonstrates several “best practices” that this author has identified from 97

more than a dozen case examples and readings on human resource management, as follows.
Align IT Metrics with Organizational Goals and Processes. If you do not link metrics to an organization's vision and goals, you only have facts. Natural's IT managers explicitly linked each performance metric with the IT function's goals and multiple metrics categories. In addition, the HR program to instill new core values was reinforced by the emphasis that Natural's IT managers placed on their own metrics initiative. Finally, the company's aggressive profitability and growth goals were communicated to the IT workforce so that each employee could see why not only IT delivery but also IT human capital development were IT organization goals that were aligned with the goals of the business.
Focus on a Salient, Parsimonious Set of Metrics. By focusing on the achievement of six categories of performance metrics, Natural's IT managers could more easily communicate them to their IT workforce and business customers. Their templates helped them make decisions about which specific metrics in each category should be introduced first, taking into account the potential relationships across metrics categories.
Recognize Motivators and Inhibitors. By explicitly stating the relation-

ships between each metric and brainstorming the intended, and unintended, behaviors from introducing a new metrics category, Natural’s IT managers had a head-start at recognizing potential inhibitors to achieving a given performance goal. Incorporate Data Collection into Work Practices. Performance measurements are not cost-free. By incorporating data collection into regular work processes, costs can be minimized and the monitoring of their collection can be minimized. Further, if customer satisfaction with a given project team is collected at regular points in the project, the data is likely more meaningful (and action-able) than if the project satisfaction data is only collected as part of an annual customer satisfaction survey process. Assign “Owners” and Hold Regular Reviews to Identify Unintended Behaviors and Inhibitors. By assigning ownership of each metric category to one

IT manager, the likelihood of early identification of unintended behaviors and impacts due to other changes within the IT organization is considerably higher. Because the metrics initiative was new at Natural, regular posthoc reviews were part of the original metrics project. However, the danger for all organizations is that after an initial implementation period is over, metric monitoring may be forgotten. 98

Remember: "You Get What You Reinforce."4

If on-time delivery is the only metric that is visibly tracked by management, do not be surprised when project teams sacrifice system quality to finish the project on time.

ONGOING CHALLENGES

Although the performance demands for an IT organization, and therefore its performance metrics, need to continually evolve, several common challenges for designing metrics can also be anticipated. First, it is difficult to show progress when no baseline has been established. One of the early tasks in an IT metrics (re)design program is to establish a baseline for each metric. But, depending on the metric, an internal baseline may take three months, six months, or longer. Too often, IT organizations undertake major transformation efforts but neglect to take the time to identify and capture "before" metrics (or at least "early" metrics) so that progress can be quantified. In some cases, continuing to also collect "old" metrics will help to show interim progress.

Another common challenge is paying enough attention to People and Process metrics when the organization is faced with aggressive project delivery timelines. In most situations, IT human capital initiatives take second place to IT delivery demands. A more "balanced" approach, in which greater weight is given to IT people issues, can therefore be achieved only if it is a goal clearly communicated from the top of the organization.

Although team-based metrics have become more common, the difficulties encountered in moving from an employee appraisal process based on individual-level metrics to one based on team-level metrics should not be underestimated. For example, it is not uncommon for people who are accustomed to individual rewards not to feel equitably treated when they are rewarded based on the performance of a team or workgroup.3 HR experts have suggested that an employee's perception of equity increases when the reward system is clearly understood and there are opportunities to participate in group-based efforts to improve group performance. A related challenge is how to develop a set of metrics that will reinforce both excellent team-based outcomes and exceptional individual talent and innovation.

Finally, today's increasingly attractive technical options for telework and virtual teaming offer a new kind of flexibility in work arrangements that is likely to be valued by workers of multiple generations, faced with different work-home balance issues, not just Generation X workers who thrive on electronic communications. However, one of the key challenges to implementing telework arrangements is a metrics issue: moving away from behavior metrics to outcome performance metrics.

CONCLUSION

Human resources are strategic assets that need to be aligned with the goals of the organization. Whether IT human resource skills are plentiful or scarce, developing metrics that reward desired people behaviors and performance is a strategic capability that deserves increased IT management attention.

References

1. Agarwal, R., Brown, C.V., Ferratt, T.W., and Moore, J.E., SIM Member Company Practices: Recruiting, Retaining, and Attracting IT Professionals, June 1999 (www.simnet.org).
2. Kaplan, R.S. and Norton, D.P., The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston, 1996.
3. Mohrman, S.A. and Mohrman, A.M., Jr., Designing and Leading Team-Based Organizations: A Workbook for Organizational Self-Design, Jossey-Bass, San Francisco, 1997.
4. Luthans, F. and Stajkovic, A.D., "Reinforce for Performance: The Need to Go Beyond Pay and Even Rewards," Academy of Management Executive, 13(2), 49–57, 1999.

Other Suggested Reading

Mathis, R.L. and Jackson, J.H., Human Resource Management, 9th edition, South-Western College Publishing, Cincinnati, OH, 2000.

ACKNOWLEDGMENTS

The author is grateful to the members of the ICEX Knowledge Exchange for IT Human Resources for sharing their insights on the topic, including the ICEX group leaders Sarah B. Kaull and Kelly Butt.


Chapter 8

Is It Time for an IT Ethics Program?
Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler

Technologists often think of themselves as involved in activities that have no ethical implications. They do not see their systems as being good or bad, or right or wrong, in and of themselves. Neither, in many instances, do they feel that these issues are part of their responsibility. Let someone else decide whether they want to use the system. Let them decide how the system should be deployed or how the data collected might be reused. But ethical questions intrude themselves into IT operations whether anyone wants them to or not. The recent designation of St. Isidore of Seville by the Pope as the patron saint of the Internet does not eliminate the need for organizational as well as personal ethics in the area of information technology. Consider the ethical issues raised when discussions focus on questions such as:

• Is it permissible for client-related, personally identifiable data to be used, traded, and sold?
• Assuming that a company has the legal right to monitor electronic mail, can this mail be read by specific people (i.e., the immediate supervisor, the IT manager, the corporate lawyer)?
• Can employee data be shared with an insurance company?
• Are systems that store personal data vulnerable to computer hacking?
• Should multiple conversational language programs be introduced simultaneously or as they become ready?
• What responsibility do technicians have to report "suspicious," perhaps pornographic, files on corporate microcomputers?
• Should tracking software be used to monitor employee movements on the Internet?
• At what point do your e-mails to customers become unwelcome spam?


To be sure, these ethical issues may also have legal and practical implications. Nonetheless, IT personnel should not approach these issues as though their actions are ethically neutral. They are not. Few IT workers know that professional organizations such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers have promulgated codes of ethics. Of those who do, even fewer know how to apply the codes and have entered into serious conversations about the ethical trade-offs they may be required to consider. Most IT workers consider themselves ethical, but ethical decision making requires more than just believing that you are a good person. It also requires sensitivity to the ethical implications of decisions. Further, ethical discussions rarely receive the depth of analysis they deserve. Many of these questions demand an ability to evaluate issues with complex, ambiguous, and incomplete facts. What is right is not necessarily what is most profitable or cheapest for the company. Ethical decision making requires ethical commitment, ethical consciousness, and ethical competency.

Currently, the ethical framework of IT is based primarily on the tenets of individual ethics. This is problematic, however, because it suffers from sins of omission (i.e., forgetting to ask relevant questions) and sins of commission (i.e., being asked by a superior to undertake unethical actions and not being able to invoke personal ethical standards). Many governmental IT agencies are implementing formal approaches to raising, discussing, and resolving ethical questions. The time may be ripe to discuss doing the same in business IT departments.

WHY AN ETHICS PROGRAM?

If management believes that it and its employees are basically ethical, why is a formal ethics program worth pursuing? Perhaps the strongest among many motivations for this effort is the desire to make ethical behavior standard practice within the organization. Employees under pressure to economize, to reach more clients, and to produce more revenue may begin to feel that unethical practices are implied, if not encouraged. An ethics program announces management's commitment to ethical behavior in all aspects of the IT effort. It encourages people to adopt and pursue high ethical standards of practice. Knowledge that ethical issues are being debated motivates them to identify these issues and make them visible. It shapes their behavior so that they act ethically and have confidence that management will back them whenever they take ethically correct actions. Considering ethical positions before new computer systems are developed allows conscious ethical decisions to be built into the system when change is least costly and before damage from an ethically indefensible system is incurred. The support of professional codes of ethics promotes the image of IT workers as professionals in pursuit of reputable goals. Moreover, it has often been observed that good ethics is good business. The customers of an ethical business soon come to see that the company is committed to providing the best possible service.

Ethical considerations should be openly and thoroughly discussed when systems are being implemented that affect the company's workforce. They also loom larger when they affect vulnerable populations, the poor, or the under-educated. Although it should not be a special consideration, the prospect that spin-off effects of a system might bring the organization into the public eye encourages special attention to the ethical basis of a system and whether or not it can stand public scrutiny.

HOW TO ORGANIZE ETHICS AS A PROGRAM

IT management is complex, driven by many forces, and subject to issues with a growing number of ethical implications. Maintaining high personal ethics when conducting daily business activities as a manager is extremely important, but maintaining high organizational ethics must be every employee's responsibility as well. To that end, we suggest that IT organizations adopt an ethics program to help their staff become aware of and deal with these issues. Building and adopting an organizational ethics program cannot make people ethical, but it does help them make better decisions. The benefits accrue to employees who are treated ethically as much as they do to customers and clients. One ethicist suggests that an ethics program needs to:

• Establish organizational roles to manage ethical issues.
• Schedule ongoing assessment of ethics requirements.
• Establish required operating values and behaviors.
• Align organizational behaviors with operating values.
• Develop awareness and sensitivity to ethical issues.
• Integrate ethical guidelines into decision making.
• Structure mechanisms to resolve ethical dilemmas.
• Facilitate ongoing evaluation of and updates to the program.
• Help convince employees that attention to ethics is not just a knee-jerk reaction done to get out of trouble or to improve one's corporate public image.

The number and magnitude of challenges facing IT organizations are unprecedented. Ethical issues that contribute to the anxiety of IT executives, managers, and staff are dealt with every day. Included in the sources of this angst are pressure to reduce costs, mergers and acquisitions, financial and other resource constraints, and rapid advances in information technologies that complicate and often hide the need for ethical decision making during system design, development, and implementation. However, people cannot and should not make such decisions alone or without a decision-making framework. IT organizations should have vehicles, such as a code of ethics and an ethics program, to assist with the decision-making process. Perhaps the precise steps presented here are not as important as the initiation of some well-demarcated means by which to inaugurate a conscious, ethical decision-making process. What is important is not so much an academically defined methodology as the need for IT to adopt a disciplined approach to ethical decision making. Individuals in the organization need to reflect on the mission and values of IT and use that reflection as a guide, either by itself or in concert with a defined methodology.

PRINCIPLES OF ETHICS

Before identifying a few core ethical principles that should be taken into account in evaluating a given issue, it is necessary to distinguish between ethical and moral assessments (questions of right and wrong) and ostensibly related principles. Legal principles, for example, impose sanctions for improper actions. One may find that what makes an action right and what makes it legal are different, perhaps even in conflict. It is also important to note that what is politically or technically desirable and what is ethical may not be the same. Guiding ethical principles set standards for the organization that go beyond the law in such areas as professional ethics, personal ethics, and general guiding principles. These principles will not always dictate a single, ethically acceptable course of action, but they help provide a structure for evaluating and resolving competing ethical claims.

There are many tools and models for financial and logistic decision making, but few guides to indicate when situations might have an ethical implication. Yet this awareness is a crucial first step before decisions are made. Recognizing the moral context of a situation must precede any attempt to resolve it. Exhibit 1 displays the most commonly asserted ethical principles: generic indicators to be used as compelling guides for an active sense of right and wrong. For each principle, an example is given of an ethical issue that might be raised by people using this principle.

STRATEGIES FOR FOSTERING AN ETHICAL ORGANIZATION

Adopt the Goal of Implementing an Ethics Program

To implement a successful ethics program at any level, executive leadership is desirable on the part of the president of the organization. Within IT, an ethics program will need the equally public support of the IT director. Both executives must be committed to offering leadership in this arena. Public and unequivocal statements supporting the attainment of ethical goals should be promoted as a general goal of the company and of IT.

Exhibit 1. Selected Ethical Bases for IT Decision Making

Golden rule: Treat others as you wish to be treated.
• Do not implement systems that you would not wish to be subjected to yourself.
• Is your company using unlicensed software although your company itself sells software?

Kant's categorical imperative: If an action is not right for everyone, it is not right for anyone.
• Does management monitor call center employees' seat time, but not its own?

Descartes' rule of change (also called the slippery slope): If an action is not repeatable at all times, it is not right at any time.
• Should your Web site link to another site, "framing" the page so users think it was created and belongs to you?

Utilitarian principle (also called universalism): Take the action that achieves the most good. Put a value on outcomes and strive to achieve the best results. This principle seeks to analyze and maximize the good of the covered population within acknowledged resource constraints.
• Should customers using your Web site be asked to opt in or opt out of the possible sale of their personal data to other companies?

Risk aversion principle: Incur the least harm or cost. Given alternatives that have varying degrees of harm and gain, choose the one that causes the least damage.
• If a manager reports that a subordinate criticized him in an e-mail to other employees, who would do the search and see the results of the search?

Avoid harm: Avoid malfeasance, or "do no harm." This basis implies a proactive obligation of companies to protect their customers and clients from systems with known harm.
• Does your company have a privacy policy that protects, rather than exploits, customers?

No free lunch rule: Assume that all property and information belongs to someone. This principle is primarily applicable to intellectual property that should not be taken without just compensation.
• Has your company used unlicensed software?
• Or hired a group of IT workers from a competitor?

Legalism: Is it against the law? Moral actions may not be legal, and vice versa.
• Might your Web advertising exaggerate the features and benefits of your products?
• Are you collecting information illegally on minors?

Professionalism: Is an action contrary to codes of ethics? Do the professional codes cover a case, and do they suggest the path to follow?
• When you present technological alternatives to managers who do not know the right questions to ask, do you tell them all they need to know to make informed choices?

Evidentiary guidance: Is there hard data to support or deny the value of taking an action? This is not a traditional "ethics" value, but it is a significant factor related to IT's policy decisions about the impact of systems on individuals and groups. This value involves probabilistic reasoning, where outcomes can be predicted based on hard evidence from research.
• Do you assume that you know PC users are satisfied with IT's service, or has data been collected to determine what they really think?


Exhibit 1. Selected Ethical Bases for IT Decision Making (continued)

Client/customer/patient choice: Let the people affected decide. In some circumstances, employees and customers have a right to self-determination through the informed consent process. This principle acknowledges a right to self-determination in deciding what is "harmful" or "beneficial" for their personal circumstances.
• Are your workers subjected to monitoring in places where they assume that they have privacy?

Equity: Will the costs and benefits be equitably distributed? Adherence to this principle obligates a company to provide similarly situated persons with the same access to data and systems. This can imply a proactive duty to inform and make services, data, and systems available to all those who share a similar circumstance.
• Has IT made intentionally inaccurate projections as to project costs?

Competition: This principle derives from the marketplace, where consumers and institutions can select among competing companies based on all considerations, such as degree of privacy, cost, and quality. It recognizes that to be financially viable in the market, one must have data about what competitors are doing and understand and acknowledge the competitive implications of IT decisions.
• When you present a build-or-buy proposition to management, is it fully aware of the risk involved?

Compassion/last chance: Religious and philosophical traditions promote the need to find ways to assist the most vulnerable parties. Refusing to take unfair advantage of users or others who do not have technical knowledge is recognized in several professional codes of ethics.
• Do all workers have an equal opportunity to benefit from the organization's investment in IT?

Impartiality/objectivity: Are decisions biased in favor of one group or another? Is there an even playing field? IT personnel should avoid potential or apparent conflicts of interest.
• Do you or any of your IT employees have a vested interest in the companies that you deal with?

Openness/full disclosure: Are persons affected by this system aware of its existence, aware of what data is being collected, and knowledgeable about how it will be used? Do they have access to the same information?
• Is it possible for a Web site visitor to determine what cookies are used and what is done with any information they might collect?

Confidentiality: IT is obligated to determine whether data it collects on individuals can be adequately protected to avoid disclosure to parties whose need to know is not proven.
• Have you reduced security features to hold expenses to a minimum?

Trustworthiness and honesty: Does IT stand behind ethical principles to the point where it is accountable for the actions it takes?
• Has IT management ever posted or circulated a professional code of ethics with an expression of support for seeing that its employees act professionally?


Establish an Ethics Committee and Assign Operational Responsibility to an Ethics Officer

An ethics infrastructure links the processes and practices within an organization to the organization's core mission and values. It provides a means by which to invite employees to raise ethical concerns without fear of retribution and to demonstrate that the company is interested in fostering ethical conduct. It is a mechanism that reflects a desire to infuse ethics into decision making.

First, establish an IT Ethics Committee, the purpose of which is to provide a forum for the improvement of IT and organizational ethics practices. This group, which need not be limited to the IT staff, should include people who possess knowledge and skills in applied ethics. The members should have appropriate knowledge of systems development to assist developers as they create systems that are ethically valid. The members themselves, and especially the chief ethics officer, should be seen as having personal characteristics that are consistent with the functions of the committee. That is, they should be respected, personally honest, of high integrity and courage, ethical, and motivated and committed to creating an ethical organization. The basic functions of the committee include:

• The education of IT staff as to the nature and presence of ethical issues and the methods of dealing with these issues
• The recommendation and oversight of policies guiding the development of new computer systems and the reengineering of old computer systems
• The increase of staff, client, and customer satisfaction due to the deployment of ethically defensible systems
• The identification of key system features that avoid institutional and individual liability
• The encouragement and support of ethical standards of practice, including the creation of practices that remove ethical uncertainty and conflicts

Given that most ethical questions in IT are related to systems development and maintenance practices and data privacy, adequate time to consider the issues at stake is not as significant an issue as it might be in other organizations. At a hospital, for example, ethical issues may take new forms every day. The committee must have the prestige and authority to effect changes in system development and to keep the affected employees free of reprisals from managers whose priorities and (un)ethical principles might otherwise hold sway. Means should be found to reward rather than punish people who identify ethical problems. This may enable them to focus on broader organizational issues as well as IT conflicts specifically. The committee needs to be proactive in the identification of emerging ethical issues that not all IT personnel have come to anticipate. Initial tasks of the committee and the Chief Policy Officer (CPO) are generally not difficult to determine. They should seek to clearly define the organization's privacy policy, its security policy, and its workplace monitoring policy.

Adopt a Code of Ethics

Examine the codes of ethics from the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery. Other codes are also available. Adopt one of the codes as the standard for your IT group as a means of promoting the need for individuals to develop their concern for ethical behavior.

Make the Ethics Program Visible

Post the code of ethics prominently and refer to it as decisions are being made so that people can see that its precepts have value. Similarly, let IT workers know of decisions made and of issues being discussed so that they gain experience with the processes in place and so that they understand that ethics are of compelling interest to the company. Let them know how ethical errors were made in the past and how they have since been corrected. Show gratitude to people who raise issues, rather than treating them as troublemakers. Provide occasional workshops on ethical questions as part of an ongoing in-service training effort, both to better inform people about how they should proceed if a question arises and to advertise your efforts more effectively.

Establish a Reporting Mechanism

For people to raise ethical concerns, they must feel comfortable doing so. This should be possible even if a supervisor does not wish to see the question raised. Let people know how they can raise an issue without fear of dismissal or retaliation.

Conducting Ethical Analysis

How does one analyze ethical questions and issues? There are both quantitative and qualitative approaches to this task. The ethics committee must first develop a clear set of mission statements and value statements. Nash, writing for the Harvard Business Review, suggests that participants in a policy discussion of this nature consider the following questions:

• Have you defined the problem accurately?
• How would you define the problem if you stood on the other side of the fence?
• How did this situation occur in the first place?
• To whom and to what do you give your loyalty as a person and as a member of the corporation?
• What is your intention in making this decision?
• How does this intention compare with the probable result?
• Whom could your decision or action injure?
• Can you discuss the problem with the affected parties before you make your decision?
• Are you confident that your position will be as valid over a long period of time as it seems now?
• Could you disclose without qualm your decision or action to your boss, your CEO, the board of directors, your family, society as a whole?
• What is the symbolic potential of your action if understood? Misunderstood?
• Under what conditions would you allow exceptions to your stand?

Such questions are likely to generate many useful discussions, both formal and informal, as questions such as those noted earlier are being reviewed or reevaluated.

Consider a Board Committee on Ethics

A large company might consider creating a subcommittee on ethics from within the board of directors. This committee would review ethical questions that affect other functional areas, such as marketing and financial reporting.

Review and Evaluate

Periodically determine whether the structures and processes in place make sense. Are other safeguards needed? Were recommendations for ethical behavior carried out? Have structural changes elsewhere in the company created a need to reassess how the program is working and how it can be improved?

CONCLUSION

Current business literature emphasizes that organizational ethics is not a passing fad or movement. Organizational ethics is a management discipline with a programmatic approach that includes several practical tools. As stated, it is not imperative that this discipline have a defined methodology. However, organizational ethics does need to consist of knowledge of ethical decision making; process skills that focus on resolving value uncertainty or conflict as it emerges in the organization; the ability to reflect, both professionally and personally, on the mission, vision, and values of IT units; and an ethical commitment from the board of trustees and executive leaders. An ethical organization is essential for quality IT and for successful organizations.

Based on an exhaustive literature review and comparison of industry standards, we believe it is important that IT develop an organizational ethics discipline that is communicated throughout the organization, from the top down, and that becomes an integral part of daily business operations. It is invaluable to have a process and a structure that guide decisions on questions such as the extent to which it is the company's responsibility to guard against identity theft, to prevent software piracy in all of its offices (no matter how widely distributed), to protect whistleblowers should the need arise, or to limit the causes of repetitive stress injuries. An ethics program seeks to encourage all personnel to become attentive to the ethical implications of the work in which they are engaged. Once they are conscious of the potentially serious ethical implications of their systems, they begin to consider what they can do to attain ethically responsible goals using equally responsible means to achieve those ends. They incorporate into their thinking the implications other professionals bring to the profession's attention. Most importantly, ethical perspectives become infused into the operations of the IT unit and the corporation generally.

It is clear that ethical organizations do not emerge without the presence of leadership, institutional commitment, and a well-developed program. Further, ethical organizations that have clearly presented mission and values statements are capable of nurturing ethically grounded policies and procedures, competent ethics resources, and broader corporate support for ethical action. It is time for an ethics program in IT.

References and Further Information

Baase, Sara. A Gift of Fire: Social, Legal and Ethical Issues in Computing, Upper Saddle River, NJ: Prentice-Hall, 1997.
Bowyer, Kevin. Ethics and Computing: Living Responsibly in a Computerized World, New York: IEEE Press, 2001.
Business Ethics: Managing Ethics in the Workplace and Social Responsibility, http://www.mapnp.org/library/ethics/ethics.htm.
Edgar, Stacey L. Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett, 1999.
Johnson, Deborah G. Computer Ethics, Upper Saddle River, NJ: Prentice-Hall, 1994.
Nash, Laura L. "Ethics Without the Sermon," Harvard Business Review, 59, 1981.
Spinello, Richard. Cyberethics: Morality and Law in Cyberspace, Sudbury, MA: Jones and Bartlett, 1999.


Chapter 9

The CIO Role in the Era of Dislocation
James E. Showalter

Peter Drucker has suggested that the role of the CIO has become obsolete. He argues that information technology has become so mission-critical for reaching the company's strategic goals that responsibility for it will ultimately be subsumed by the CEO or the CFO. After years of viewing information technology as an excessive but "necessary cost," executive management has now awakened to the recognition that failing to embrace and manage "dislocating" information technologies can mean extinction. A dislocating technology is defined as a technological event that enables development of products and services whose impact creates completely different lifestyles or commerce. The Internet has been such a dislocating force, and others are on the horizon. Navigating these dislocations requires leadership and vision that must span the total executive staff, not just the CIO. This, I believe, is Drucker's point: the management of dislocating technologies transcends any individual or organization and must become integral to the corporate fabric. However, I also believe there is still an important role, albeit a different role, for the CIO in the 21st-century enterprise.

In his recent book, The Innovator's Dilemma: When New Technologies Cause Great Firms to Fail, Clayton Christensen provides a superb argument for corporate leadership that takes the company to new, enhanced states enabled by technological dislocations. The Silicon Valley success stories have been entrepreneurs who recognize the market potential of dislocations created by technology. I believe the 21st-century CIO's most important role is to provide entrepreneurial leadership for the company during these periods of dislocation.

FROM PUNCTUATED EQUILIBRIUM TO PUNCTUATED CHAOS?

Evolutionary biologist Stephen Jay Gould theorizes that the continuum of time is periodically "punctuated" with massive events or discoveries that create dislocations of the existing state of equilibrium to a new level of prolonged continuous improvement (i.e., punctuated equilibrium). The dinosaurs became painfully aware of this concept following the impact of the meteorite into the Yucatan peninsula. In an evolutionary sense, the environment has been formed and shaped between cataclysmic dislocations: meteorites, earthquakes, droughts, plagues, volcanoes, and so on. Although exact scenarios are debatable, the concept is plausible even from events occurring in our lifetime.

There are many examples of analogous technological discoveries and innovations (the internal combustion engine, antibiotics, telephone service, the interstate highway system, etc.) that promoted whole new arrays of products and possibilities that forever changed commerce and lifestyles. In each of these examples, our quality of life improved through the conveniences these technologies enabled. The periods between dislocations are getting shorter. For example, the transitions from the horse to the internal combustion engine to the fuel cell took a century, whereas the transformations between centralized computing, distributed computing, desktop computing, network computing, and ubiquitous computing have occurred in about 40 years. In the next century, technological dislocations in communications, genetics, biotechnology, energy, transportation, and other areas will occur at even shorter intervals. In fact, change is expected so frequently that Bill Gates has suggested that our environment is actually in constant change or upheaval marked by brief respites: "punctuated chaos" rather than punctuated equilibrium.

We are currently in the vortex of a dislocation or transition period that many companies will not survive in the 21st century. With certainty, many new companies, yet unidentified, will surface and replace many of the companies currently familiar to us. No company is exempt from this threat, even the largest and most profitable today. The successes will be those that best leverage the dislocating technologies. To protect their companies from extinction, CIOs must understand the economic potentials and consequences of dislocating technologies.

THE ERA OF NETWORK COMPUTING

We are currently experiencing a new technological dislocation that embodies potential equivalent to, or possibly greater than, that of any previous innovation. This new dislocation is network computing or, perhaps a better name, ubiquitous communications. Network computing involves the collaborative exchange of information between objects, both human and inanimate, through the use of electronic media and technologies. Although network computing could arguably be attributed to early telecommunications applications in which unsophisticated display terminals were attached to mainframe computers through a highly proprietary communications network, the more realistic definition begins with the Internet. Moreover, thinking must now focus on anything-to-anything interchange and not be limited only to human interaction. Navigating this transition will challenge every company; it is a mission for the CIO. From today's vantage, network computing includes (1) the Internet and Internet technologies and (2) pervasive computing and agent technologies.

The Internet and Internet Technologies

The compelling and seductive power of the Internet has motivated all major worldwide enterprises to adopt and apply Internet technologies within their internal networks under local auspices. These private networks, called intranets, are rapidly becoming the standard communications infrastructure spanning the total enterprise. Intranets are indigenous and restricted to the business units that comprise the enterprise. They are designed to be used exclusively by employees and authorized agents of the enterprise in such a way that the confidentiality of the enterprise's data and operating procedures is protected. Ingress and egress to and from intranets are controlled and protected by special gateway computers called firewalls. Gateway services, called portal services, now enable the enterprise to create a single portal to its network of internal Web sites representing specific points of interest that the company allows for limited or public access. In general, the development and stewardship of intranets are under the auspices of the CIO.

Whereas the Internet conceptually initiated the possibilities afforded by network computing to an enterprise, it is the intranets that have enabled the restructuring or reengineering of the enterprise. Essentially all major enterprises have launched intranet initiatives. Due largely to ease of implementation and low investment requirements, enterprises are chartering their CIOs to implement intranets posthaste and without time-consuming cost justifications. In most cases, enterprises are initially implementing intranets to provide a plethora of "self-service" capabilities available to all or most employees. In addition to the classic collaboration services (e-mail, project management, document management, and calendaring), administrative services such as human resource management and financial services have been added that enable employees to manage their respective portfolios without the intervention of service staffs. This enables former administrative staffs to be transformed into internal consultants, process specialists, and other more useful positions for assisting in the successful implementation of major restructuring initiatives, staff retraining, and, most important, the development of a new corporate culture. Over time, all applications, including mission-critical applications, will become part of the intranet. Increasingly, these duties are being outsourced to trusted professional intranet specialists. Clearly, CIOs must provide the leadership in the creation and implementation of the company's intranet.

Companies in the 21st century will be a network of trusted partners. Each partner will offer specific expertise and capabilities that are unavailable within, or impractical to maintain within, the host or nameplate company. Firms producing multiple products will become a federation of subsidiaries, each specific to the product or services within its market segment. Each company will likely require different network relationships with different expert providers. This fluidity is impossible within the classical organizational forms of the past. To meet these growing requirements and to remain profitable, companies are forced to reduce operating costs and develop innovative supply chain approaches and innovative sales channels. Further, in both the business-to-business (buy side) and the business-to-customer (sell side) supply chains, new "trusted" relationships are being formed to leverage supplier expertise such that finished products can be expedited to the customer. Initially, this requirement has motivated enterprises to "open" their intranets to trusted suppliers (buy side) and to dealers, brokers, and customers (sell side) to reduce cycle times and cost. These extended networks are called extranets. However, the cost of maintaining extranets is extreme, and they are generally limited to large host companies. In addition, lower-tier suppliers and partners understandably resist being "hard wired" into multiple proprietary relationships with multiple host companies. This form of extranet is unlikely to persist and will be replaced by a more open approach.

Industry associations such as SITA (Société Internationale de Télécommunications Aéronautiques) for the aerospace industry and the Automotive Network Exchange (ANX) for the automotive industry have recognized the need for a shared environment in which companies within a specific industry could safely and efficiently conduct commerce. Specifically, an environment is needed in which multiple trusted "virtual networks" can simultaneously coexist. In addition, common services indigenous to the industry, such as baggage handling for airlines, could be offered as a saving to each subscribing member. These industry-specific services, called "community-of-interest networks" (COINS), are evolving in every major industry. COINS are analogous to the concept of an exchange. For example, the New York Stock Exchange is an environment in which participating companies subscribe to a set of services that enable their securities to be traded safely and efficiently.

For all the same reasons that intranets were created (manageability, availability, performance, and security), exchanges will evolve across entire industries and reshape the mode and means of interenterprise commerce. Supply and sales chain participants within the same industry are agreeing on infrastructure and, in some noncompetitive areas, on data and transaction standards. In theory, duplicate infrastructure investments are eliminated and competitiveness becomes based on product/customer relationships. The automotive industry, for example, has cooperatively developed and implemented the ANX for all major original equipment manufacturers and (eventually) all suppliers. In addition, ANX will potentially include other automotive-related market segments, such as financial institutions, worldwide dealers, product development and research centers, and similar participants. Industries such as aerospace, pharmaceuticals, retail merchandising, textiles, and consumer electronics will also embrace industry-specific exchanges.

Unlike the publicly accessible Internet, which is essentially free to users, exchanges are not free to participants. By agreement, subscription fees are required to support an infrastructure capable of providing the service levels required for safe, effective, and efficient commerce. The new "global internet" or "information highway" (or whatever name is ultimately attached) will become an archipelago of networks, one of which is free and open (the Internet) while the others are private industry and enterprise subscription networks. The resulting architecture is analogous to today's television paradigm: free channels (the public Internet), cable channels (industry-specific exchanges), and pay-per-view channels (one-off services, such as a video teleconference). Regardless of how this eventually occurs, intranets are predicted to forever change the internal operations of enterprises, and exchanges are predicted to change commerce among participants within an industry. Again, the CIO must provide the leadership for his or her firm to participate in this evolving environment.

Pervasive Computing and Agent Technology

The second dislocation is ubiquitous or pervasive computing. Andy Grove of Intel estimated that there would be 500 million computers by 2002. In most cases, today's computers are physically located in fixed locations, in controlled environments, on desktops, and under airline seats. They are hardly "personal" in that they are usually away from where we are, similar to our automobiles. However, this is changing dramatically. There are already six billion pulsating noncomputer chips embedded in other objects throughout the world, such as our cars, thermostats, and hotel door locks. Called "jelly beans" by Kevin Kelly in his books Out of Control and New Rules for the New Economy, these will explode to over ten billion by 2005. Also known as "bots," these simple chips will become so inexpensive that they can affordably be attached to everything we use, and even discarded along with the item when we are finished using it, such as clothing and perishables. Once the items we use in daily life become "smart" and are capable of "participating" in our daily lives, the age of personal computing will have arrived.

Programmable objects or agents are the next technological dislocation. Although this admittedly sounds futuristic and even a bit alarming, there is little doubt that technology will enable the interaction of "real objects" containing embedded processors in the very near future. Java, Jini, the Java chip, and next-generation (real-time) operating systems are enabling information collection and processing to be embedded within real-life objects. For example, a contemporary automobile contains between 40 and 70 microprocessors performing a vast array of monitoring, control, guidance, and driver information services. Coupled with initiatives for intelligent highway systems (ITS), next-generation vehicles will become substantially safer, more convenient, more efficient, and more environmentally friendly than our current vehicles. This same scenario is also true of our homes, transportation systems, communications systems (cellular phones), and even our children and persons.

Every physical object we encounter or employ within our lifestyles can be represented by a software entity embedded within the object or representing the object as its "agent." Behavioral responses to recognizable stimuli can be "programmed" into these embedded processors so that they serve as our agents (e.g., light switches that sense the absence of people in a room and turn off to save energy, or automobiles that sense other automobiles or objects in our path and warn us or even take evasive action). Many other types of agents perform a plethora of routine tasks that are not specific to particular objects, such as searching databases for information of interest to the reader. The miniaturization of processors (jelly beans), network programming languages (Java), network connectivity (Jini), and appliance manufacturers' commitment will propel this new era to heights yet unknown. Fixed process systems will be replaced by self-aligning systems enabled by agent technology. These phenomena will not occur naturally but, rather, must be directed as carefully as all other corporate resources. In my judgment, this is the role of the 21st-century CIO.
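To make the stimulus-response idea concrete, here is a toy sketch in Python of a hypothetical occupancy-sensing light agent. The class, its method, and the five-minute threshold are assumptions chosen purely for illustration; they are not drawn from any product or platform named in this chapter.

class OccupancyLightAgent:
    """Toy agent: turns a light off after a room has been empty long enough."""

    def __init__(self, idle_limit_seconds=300):
        self.idle_limit = idle_limit_seconds
        self.seconds_empty = 0
        self.light_on = True

    def sense(self, occupied, elapsed_seconds):
        """React to one sensor reading (the stimulus) and return the light state."""
        if occupied:
            self.seconds_empty = 0
            self.light_on = True
        else:
            self.seconds_empty += elapsed_seconds
            if self.seconds_empty >= self.idle_limit:
                self.light_on = False  # save energy once the room stays empty
        return self.light_on


# Hypothetical readings arriving once a minute from an occupancy sensor
agent = OccupancyLightAgent(idle_limit_seconds=300)
for occupied in [True, False, False, False, False, False]:
    state = agent.sense(occupied, elapsed_seconds=60)
print("light on?", state)  # False: the room has been empty for five minutes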

SUMMARY

In summary, the Internet has helped launch the information age and has become the harbinger for the concepts and structure that will enable international communication, collaboration, and knowledge access for commerce and personal growth. Although the Internet is not a universal solution to all commerce needs, it has, in an exemplary manner, established the direction for the global information utility. It will remain an ever-expanding and vibrant source for information, personal communication, and individual consumer retailing. Intranets, developed by enterprises, are reshaping the manner in which all companies will structure themselves for the challenging and perilous journey of the 21st century. Complete industries will share a common exchange infrastructure for exchanging information among their supply, demand, product, and management support chains. Pervasive computing will emerge with thunder and lightning over the next few years and offer a dazzling array of products that will profoundly enrich our standard of living. Agent technology, coupled with embedded intelligence in ten billion processors, will enable self-aligning processes that adapt to existing environmental conditions.

CIOs who focus on the business opportunities afforded by dislocating information technologies will be the ones who succeed. Even if the CIO title changes in the future, an officer of the company must provide leadership in navigating the company through technological transitions or dislocations. In this new millennium, however, there is a lot of work to be done to create the environment discussed in this chapter. As Kevin Kelly observes:

"…wealth in this new regime flows directly from innovation, not optimization: that is, wealth is not gained by perfecting the known, but by imperfectly seizing the unknown."

Successful CIOs will adopt this advice as their credo.

Recommended Reading

Christensen, C. 1997. The innovator's dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press.
Drucker, P. 1994. Introduction. In Techno Vision, edited by C. Wang. New York: McGraw-Hill.
Gates, B. 1999. Business @ the Speed of Thought. New York: Warner Books.
Kelly, K. 1997. The new rules for the new economy: Twelve dependable principles for thriving in a turbulent world. Wired, September, 140.
Schlender, B. 1999. E-business according to Gates. Fortune, April 12.



Chapter 10

Leadership Development: The Role of the CIO
Barton S. Bolton

A successful CIO will always leave a legacy upon leaving an organization. What he or she will be remembered for will not be the applications portfolio, or the beloved infrastructure, or even the security plan. It will be the people and the organization left behind that will represent the real accomplishments of the CIO. It will be that legacy which will make the CIO a "Level 5" leader. Per Jim Collins (2001), a Level 5 leader is one whose organization continues to perform at an extraordinary level even after he or she has left. Put another way, you do not develop the organization; you develop the people, and the people develop the organization. It is done successfully no other way.

To get there, the CIO must serve as a role model by first understanding his or her "leadership style," then understanding the difference between leadership and management and when to apply each one. Then, the CIO needs to develop leadership capability throughout all levels of the IT organization. Those capabilities are not just for the CIO's direct reports and other IT managers, but also for the key individual contributors as they lead various projects, programs, and technical initiatives.

WHAT IS A LEADER?

So, what is this "thing" called a leader? In its simplest form, a leader is someone who has followers. A leader is found at all levels in an organization and operates very differently from a manager, although he or she may have the title of manager. A good leader also knows when and how to be a follower, depending, of course, on the given situation.

Let us also dispel some myths about leadership. Leaders are not born but can be created or developed. Having charisma helps but is not a necessary requirement. In some ways, a personal preference for extroversion helps, but many leaders, be they CIOs or CEOs, are basic introverts who have learned to be outward facing when they need to step into their roles as leaders. Perhaps the following quotes from leadership experts will help clarify and distinguish leaders from managers:

"A leader is best when people barely know he exists, not so good when people obey and acclaim him, worse when they despise him. But of a good leader, who talks little, when his work is done, his aim fulfilled, they will say: We did it ourselves."
—Lao Tzu, The Art of War

"The first responsibility of a leader is to define reality. The last is to say thank you. In between the two, the leader must become a servant and a debtor…. A friend of mine characterized leaders simply like this: 'Leaders don't inflict pain; they bear pain.'"
—Max DePree, Leadership Is an Art

"People are led and things are managed."
—Stephen Covey, Principle-Centered Leadership

"When leadership is defined not as a position you hold but as a way of 'being,' you discover that you can lead from wherever you are."
—Rosamund Stone Zander and Benjamin Zander, The Art of Possibility

What Is a Leadership Style?

Once one accepts that there are differences between leadership and management, the next step is to discover one's own leadership style. It varies from person to person, much like personality. Leadership style is not as structured as management style. And, of course, there is no "silver bullet" for becoming a leader; if there were, there would not be so many books published on the subject in the past several years. There are, however, seven essentials that serve as the foundation of everyone's leadership style (see Exhibit 1). Every leader knows what he or she believes in, is good at, and what is most important to him or her. If you do not know who you are, how are you ever going to lead others? It is usually easier for someone with years of experience to understand what he or she is all about, because the patterns of life are more obvious than they are for a younger person. However, one needs to search seriously for and understand one's mission in life, which is based on that self-awareness.

"He who knows others is learned; he who knows himself is wise."
—Lao Tzu, The Art of War

Exhibit 1. Seven Essentials of Leadership Style

1. Self-awareness: who you are and what you are good at
2. Personal values: what you believe in and what is important to you
3. Integrity and character: how you operate
4. Care about people: genuine respect for others
5. Personal credibility: positive reputation and relationships
6. Holistic viewpoint: seeing the big picture
7. Continuous learning: constant personal growth

A leader's ability to build and maintain good relationships is another major consideration. As Jim Kinney, retired CIO of Kraft Foods, has said: "Credibility is 80 percent relationships and 20 percent expertise." Business relationships are based on such things as integrity and a real caring for other people. Good leaders, after all, depend on sound relationships for people to follow them. They do not demand respect; they earn it.

The seven leadership essentials are augmented by various practices (see Exhibit 2). These may be the approach the leader takes in various situations or a personal viewpoint on a subject. Dealing with ambiguity, for example, is a tough challenge for many IT people, who are inclined to want everything answered with all "i's" dotted and all "t's" crossed. However, the business world is not that precise, and not all decisions are made with complete information available. So, an effective leader learns to adapt to the situation and become comfortable in the so-called gray area.

Exhibit 2. Types of Leadership Practices

• Ambiguity
• Cultures
• Ethics
• Creativity/innovation
• Empowerment
• Use of power
• Getting results
• Life balance

Leaders need to be sensitive to the culture in which they operate, as culture is often defined as "how decisions get made around here." A truly effective leader understands that the role of the top executive is to set the culture for the IT organization, but to do so within the cultural norms of the enterprise. Of course, when the CIO is new to the organization, there is likely an existing, known culture. If change is required, the CIO must have an effective leadership practice to bring about such a change.

Ethics is a subject on everyone's mind today, given some of the corporate scandals in the news. Ethical practices are based on a combination of personal values and societal norms. They clearly vary from country to country. They represent the boundaries in which we operate and define what is good versus bad, or acceptable versus unacceptable, behavior. Ethics always involve choices. Without ethics one most likely damages, if not destroys, one's integrity. There are cases of unethical leaders, but one has to question how effective they were when judged by their results…or lack thereof.

Many leaders are viewed as people with new ideas; leaders tend to go where others have not thought to go. Most entrepreneurs operate this way; they are not afraid of being innovative, and they depend upon their personal creativity. Most of these leaders like to build things and are not content with just running day-to-day operations. It is all part of the visions they have and the courage to pursue them. Effective leaders practice innovation and creativity to make a difference in whatever group they lead.

Given that a leader sets a direction, aligns people in that direction, and motivates them to get the results, he or she must find ways to empower people; the power of the people, those who are the followers, must be unleashed. Because the leader genuinely cares about people, he or she establishes a trust with them, a mutual respect. The followers need to know they have the authority to make decisions and that making a mistake is acceptable, as long as it is not repeated. Empowerment of others to perform on the leader's behalf is a risk the leader must take.

Another practice of a leader is the judicious use of power. There is personal power, which is often based on one's personal credibility and track record of meeting commitments. There is also positional power, which is a function of where the leader sits in the hierarchy. A third form or base of power is that of the organization, and how it is positioned in its industry and society. A good leader knows when to leverage any of these three forms of power. An effective leader probably depends most on his or her personal power and knows when, and when not, to use it. The overuse of power can diminish one's credibility as it damages or destroys those vital relationships.

The true test of leadership is the results achieved by the leader. All the visions and strategic thinking in the world will not mean anything if nothing is accomplished. Planning is good and necessary, but implementation is the key. The effective leader, using his or her leadership style and all its components, gets things done. It is getting results that really makes the difference.

One of the most challenging practices for a leader is seeking and achieving balance in his or her life. It is not only a balance between work and family, but also the third dimension of self. It is a three-legged stool of work, family, and you that must be kept in balance. This kind of balance is not achieved by an equal amount of time (e.g., eight hours for each leg), but comes more from a personal set of priorities. There are times when work demands extra hours (e.g., a major systems implementation) and there are times when family gets the priority. We all know the phrase, "If I had one more day to live, it wouldn't be in the office." But there are also times when one owes it to oneself to find that moment of silence and, of course, to maintain one's health. Life balance is usually based on one's personal values, along with understanding one's priorities in life.

Exhibit 3. Key Leadership Skills
• Facilitating
• Team building
• Listening
• Project management
• Change management
• Communicating
• Giving feedback
• Mentoring/coaching

Exhibit 4. Characteristics of the Leader’s Persona
• Passionate
• Intelligent
• Persistent
• Consistent
• Energetic
• Incisive

achieved by an equal amount of time (e.g., eight hours for each leg), but more from a personal set of priorities. There are times when work demands extra hours (e.g., a major systems implementation) and there are times when family gets the priority. We all know the phrase, “If I had one more day to live, it wouldn’t be in the office.” But there are times when one owes it to oneself to find that moment of silence and, of course, to maintain one’s health. Life balance is usually based on one’s personal values, along with understanding one’s priorities in life. Leadership style is further developed by adding skills to one’s toolkit. These are usually associated with training received at workshops or seminars. It is easy to list at least 20 such skills, but eight of the key ones are shown in Exhibit 3. The aspects of leadership style that cannot be taught but are seemingly more part of the persona can be viewed as characteristics. They appear to be more of the adjectives used to describe the leader. They represent how the leader is perceived. Again, the list can exceed 20 in number, but some of the more representative ones can be found in Exhibit 4. In summary, leadership style is based on seven essentials, augmented by key practices, enhanced by various skills, and modified by many characteristics. When totaled, there are some 60 ingredients that determine the style of a leader. The combinations are seemingly endless, which is why there is no “silver bullet” to becoming a leader and why it is a continuous learning process. THE CIO AS ROLE MODEL To be a successful leader, a CIO needs to first discover his or her own leadership style and then nurture or grow it. Developing your own style is a 123

ACHIEVING STRATEGIC IT ALIGNMENT continuous learning process, and much of your learning will come from being a role model for others — including mentoring or coaching people in your own organization. At the same time, the CIO needs to build a leadership development strategy and a supporting set of programs for the IT organization. Depending on the size and resources of the organization, these programs will most likely be a combination of internal and external learning experiences. They will take the form of various curricula that recognize that leadership development is not done in a one-week seminar, but rather in a series of educational forums over an extended period of time. What is learned in the forum is then applied or practiced on the job. Rotating the person in and out of various job assignments helps set up situations for actual practice. The initial targets for the leadership development in the IT organization should be the key potential leaders, be they part of the management group or individual contributors. Given that leadership capabilities are needed throughout the IT organization and based on the premise that the organization needs to grow its future leaders, both early career and middle career employees should be targeted for leadership development. This then becomes the legacy of the CIO: the building of an organization that knows how to both manage effectively and lead. References and Suggested Reading Buckingham, Marcus and Coffman, Curt, First, Break All the Rules, Simon & Schuster, New York, 1999. Collins, Jim, Good to Great, HarperCollins Publishers, New York, 2001. Covey, Stephen R., Principle-Centered Leadership, Simon & Schuster, New York, 1991. DePree, Max, Leadership Is an Art, Dell Publishing, New York, 1989. Kotter, John P., A Force for Change: How Leadership Differs from Management, The Free Press, New York, 1990. Sun Tzu (translated by Thomas Cleary), The Art of War, Shambhala Publications, Boston and London, 1988. Zander, Rosamund Stone and Zander, Benjamin, The Art of Possibility, Harvard Business School Press, Boston, 2000.


Chapter 11

Designing a Process-Based IT Organization Carol V. Brown Jeanne W. Ross

As we entered the new millennium, most Fortune 500 and many smaller U.S.-based firms had already invested heavily in a new way of competing: a cross-functional process orientation. Information technology (IT) was a strategic enabler of this new process focus, beginning with the first wave of enterprise systems: enterprise resource planning (ERP) system packages. This new process orientation across internal functions, as well as a new focus on processes with customers and suppliers as part of E-commerce and customer relationship management (CRM) implementations, has added a new complexity to organizational structures. Various governance forms (e.g., matrix, global) and horizontal mechanisms (e.g., new liaison roles and cross-functional teams) have been put in place to help the organization respond more quickly to marketplace changes. These business trends also raise the IT–business alignment issue of how to design an IT organization to effectively support a process-based firm. This chapter provides some answers to this question. Based on the vision and insights of IT leaders in a dozen U.S.-based Fortune 500 firms (see Appendix at the end of this chapter), we first present three organizational catalysts for becoming more process based. Then we summarize four high-level IT processes and six IT disciplines that capture the key thrusts of the new process-based IT organization designs. The chapter ends with a discussion of some organization design challenges faced by IT leaders as they repositioned their IT organizations, along with some specific solutions they initiated to address them. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


Exhibit 1. Catalysts for a Process-Based IT Organization
[Figure: the competitive environment (globalization, mergers and acquisitions, E-commerce, demanding customers) drives three organizational imperatives (cross-functional process integration, globalization of processes, business restructurings), which in turn shape the IT organization’s core IT processes and IT disciplines.]

NEW ORGANIZATIONAL IMPERATIVES The process-based companies in our study all faced highly dynamic and competitive business environments, characterized by increased globalization and merger activities, new E-commerce threats, and demanding customers. Their business mandate was to become more process oriented in order to increase responsiveness while simultaneously attaining cost efficiencies. This resulted in new organizational initiatives involving multiple business units to design and implement common, enterprise-level processes. Described below are three organizational imperatives that all of the IT executives we interviewed were proactively addressing. These imperatives were the key catalysts for a rethinking of the role of IT and a redesign of the IT organization (see Exhibit 1). Cross-Functional Process Integration. Effective cross-functional process integration was believed to be key to achieving increased customer responsiveness and reduced cycle times. All 12 companies were focusing on a small set of cross-functional processes, including order fulfillment and material sourcing, for which IT was a critical integration enabler. Most companies were also de-emphasizing functional and geographic distinctions that were part of their structural history in order to focus on both a “single face to the customer” and “a single view of the customer.” Most participants viewed their own ERP implementations as critical to enabling this enterprise-level view and a new process orientation. Globalization of Processes. Although all 12 firms already had a global presence, they were striving to become more global in their processes. In particular, order fulfillment was increasingly being viewed as a global process. However, the extent to which a firm was pursuing a global process model and the extent to which regional customization was tolerated varied across the companies. For example, the consumer products firms were retaining a local flavor for their sales and marketing efforts. 126

Designing a Process-Based IT Organization Business Restructurings. Mergers, acquisitions, and divestitures can impose dramatic changes to a firm’s business model, and at least six of the twelve companies had recently experienced one or more of these strategic jolts. Many firms had aggressive growth goals that could only be achieved by merger and acquisition activities. In other firms, changing market conditions and new E-business opportunities had been the primary catalysts for restructuring. All 12 companies were looking for ways to quickly adapt to these types of changes, and common, standard processes were expected to enable faster business restructurings.

Exhibit 2. Core IT Processes
• Lead and enable strategic organizational change
• Creatively deliver IT solutions to strategic business needs
• Ensure totally reliable yet cost-effective infrastructure services
• Manage intellectual assets

CORE IT PROCESSES Given these organizational imperatives for a new process orientation, what are the core IT processes of an IT organization in a process-based firm? Four core IT processes (Exhibit 2) are critical for a proactive IT leadership role. Each is described below. Lead and Enable Strategic Organizational Change. New information technologies, particularly Web technologies and packaged software, have created new competitive challenges for organizations that lag in their abilities to implement IT-enabled business processes. The executives we interviewed felt that it was becoming increasingly important for the IT unit to be proactive in identifying how specific technologies could become strategically important to the business. Some noted that in the past, IT organizations had tended to wait for a business imperative to which new IT applications could be applied. In today’s more dynamic and competitive business environments, IT is viewed as a catalyst as well as a solution provider. As two CIOs described it:

“The CIO has to help the company recognize what’s coming and lead — become a visionary.” “The IT organization is propelling our business…driving the business forward.” Creatively Deliver IT Solutions to Strategic Business Needs. The responsibility for IT applications has shifted from a software development mind-set to an emphasis on delivering solutions — whether custom built, reused, insourced, or

ACHIEVING STRATEGIC IT ALIGNMENT outsourced. This requires identifying alternative approaches to solving strategic business needs. The IT unit is relied upon to assess the trade-offs and obtain the best IT fit in the shortest possible amount of time and at the lowest cost. At one firm, internal personnel provided only 25 percent of the IT services, so the processes to manage outsourcers who provisioned the remainder had become critically important. Ensure Totally Reliable yet Cost-Effective Infrastructure Services. An increased dependence on centralized databases for integrated global operations has placed an entirely new level of importance on network and data center operations. The criticality of world-class IT operations now rivals that of strategic IT applications. Although highly reliable, low-cost, 24/7 support has been important for several years, what is different is that the impact of a failure has significantly increased. Firms have become so dependent on IT that there is zero tolerance for downtime. One CIO described the responsibility as “end-to-end management of the environment.” Manage Intellectual Assets. As customers become more demanding and market conditions more dynamic, organizations need to leverage individual knowledge about how best to respond to these conditions. The participants expected to be increasingly relied upon to implement a knowledge management platform that both supports processes and provides user friendly tools: 1) processes for sharing best practices and 2) tools to capture, store, retrieve, and link to knowledge about products, services, and customers. One CIO emphasized the need to understand the flow of ideas across functions and the set of processes about which information is shared across businesses.

Exhibit 3. Key IT Disciplines
• Architecture design
• Program management
• Sourcing and alliances management
• Process analysis and design
• Change management
• IT human resource development

KEY IT DISCIPLINES Given the above core IT processes, what are the key disciplines, or capabilities, that an IT organization needs to develop? Six IT disciplines (Exhibit 3) are key to the effective performance of a process-based IT organization. Each is described below. (Note that this list is intended to identify critical high-level disciplines, not to be exhaustive.)

Designing a Process-Based IT Organization Architecture Design. An IT architecture specifies how the infrastructure will be built and maintained; it identifies where computing hardware, software, data, and expertise will be located. To address the complexities of highly distributed global firms requires a well-designed IT architecture that distinguishes global from local requirements, and enterprisewide from business unit and site requirements. Architectures model a firm’s current vision, structure, and core processes and define key system linkages. Architectures are a vehicle for helping the company “recognize what is coming” and leading the way. Program Management. Program management includes not just the traditional responsibilities of priority-setting and project management, but also the management of increasingly synergistic and evolutionary application solutions. Program managers are responsible for the coordination and integration of insourced and outsourced IT solutions. Several firms had “systems integration” and “release management” capabilities as part of their IT organization structures. Increased reliance on enterprise system packages and business partner solutions also results in application solutions that are expected to evolve via frequent upgrades. The initial implementation of an ERP solution, for example, is expected to be followed by many waves of opportunities for continuous improvement. Sourcing and Alliances Management. IT units are increasingly taking responsibility for negotiating and managing contracts with both internal business units and external alliances. Firms use service level agreements or other negotiated arrangements to ensure that business-unit priorities will be addressed cost-effectively. At the same time, corporate IT leaders are also managing outsourcers who provide global and local services. Some CIOs spoke of outsourcing all “commodity-based services,” including data center operations, help desk support, and network management. The new emphasis on external provisioning and ongoing alliances has heightened the need for a sourcing and alliances management capability. Some participants noted that they increasingly required contracts that detail expectations for knowledge transfer from external to internal resources. One IT leader mentioned the special challenge of renegotiating external alliances following a merger. Process Analysis and Design. As firms become more process based, they

require mechanisms for identifying, analyzing, storing, and communicating business processes. They also need to be able to identify when new technologies offer new opportunities to improve existing processes. Several participants noted that analysis and design expertise for cross-functional processes was now an explicit IT organization skillset. Process mapping

was being used, not only for business process redesign but also to ensure compliance with standard processes. Change Management. Because of the ongoing emphasis on process improvements and the implementation of new releases of packaged solutions, change management has become a key IT discipline. For example, continuous improvement projects to take advantage of new versions of enterprise system packages typically also involve changes in organizational processes, making a competence in change management a significant competitive advantage. One participant noted:

“We need to put something in, get value out of it, and replace it more or less painlessly.” IT Human Resource Development. Ensuring a high-quality pool of IT professionals with the skills needed for the above five disciplines is a critical discipline in its own right. IT leaders need to consistently provide opportunities for their workforce to renew technical skills, expand business understanding, and develop interpersonal skills. Global teams require IT professionals who can collaborate cross-functionally as well as cross-culturally. Internal consulting relationships with business units and external alliance relationships with vendors, implementation partners, and contractors demand that they recruit and develop an IT staff with strong interpersonal relationship-building skills. The need for IT professionals to remain committed to honing their technical skills as well as their business skills is as acute as ever. Some participants even emphasized that technical skills were only useful to the extent that they solved business problems, and that “language barriers” between IT and business units can still exist.

ORGANIZATION DESIGN CHALLENGES Summarized below are four major challenges faced by IT leaders as they forged a new kind of process-based IT organization, as well as some specific initial solutions to address them. Working under Complex Structures The evolution to a process-oriented organization adds an additional challenge to management decision making due to the addition of the process dimension. It also can result in more complex organizational structures. All 12 companies had introduced a variety of structures and mechanisms to ensure that they “didn’t lose sight of” their new processes. Several firms had designated process executives to manage a newly consolidated, cross-functional process such as order fulfillment. In one firm in which the top management team wanted to leverage its processes across its strategic business units (SBUs), the leader of each major process was also the vice president of an SBU.

Designing a Process-Based IT Organization In some firms, functional business units were still “holding the money,” which can be a constraint to adopting more process-based IT solutions. One participant distinguished between firms in an early stage process-oriented organization versus a later stage, as follows: • Process-focused firms are in an early stage in which process management is the responsibility of senior executives. • Process-based firms are in a later stage in which process thinking has become more pervasive in the firm and the responsibilities for managing processes have been diffused throughout lower management levels. Essentially all of our participants supported the notion that alignment with a process-oriented firm was possible under various IT governance structures (centralized, federal, or hybrid), although some level of centralization was needed to support globalization. Several firms with IT organizations in their business units had increased enterprise-level accountability by creating a dual reporting relationship for the divisional IT heads: a solid-line report to the corporate CIO had been added. Some corporate IT units were organized around cross-functional business processes (aligned with enterprise system modules or process owners). One firm had appointed process stewards within the product-oriented business units to work with IT personnel on global business processes. Other corporate IT units were organized around core IT processes. Newer structural solutions were also being experimented with; at one company, a “two-in-the-box” (co-leadership) approach was used to ensure information flows across critical functions. One recently recentralized corporate IT unit had created three major structures. First, global development teams were aligned with process executives (e.g., materials management, customer fulfillment) or process councils (e.g., a four-person council for the global manufacturing processes). Second, global infrastructure service provider teams were aligned by technology (e.g., telecommunications). Third, IT capability leaders (e.g., systems integration, sourcing, and alliances management) were responsible for developing the common IT processes and IT skills needed by the global development teams and infrastructure service providers. Devising New Metrics Most of the participants were still using traditional metrics focused on operational efficiencies but were also gradually introducing new metrics to measure IT value to the business. Overall, the predominant view was that metrics should be “pervasive and cohesive.” 131

ACHIEVING STRATEGIC IT ALIGNMENT For example, formal service level agreements were being used to assess the services provided to business units in support of their processes. To ensure that IT staff were focused on key organizational processes and that IT priorities were aligned with organizational priorities, metrics to measure business impacts had also been implemented in some firms. At one firm, an IT investment that was intended to reduce cost of goods sold was being assessed by the change in cost of goods sold — although other business factors clearly influenced the achievement of the metric. To help assess the IT unit’s unique contribution, several firms were also involving business unit managers in the performance reviews of IT managers. As stated by one IT leader: “Metrics help firms become more process-based.” Making Coordination “Natural” Another new challenge was to make cross-unit coordination a “natural” activity. Enterprise systems can enable new cross-functional processes but “old ways of working” can create bottlenecks to achieving them. Although some cross-functional process integration can be achieved via formal lines of authority, other types of horizontal designs (e.g., liaison roles, crossfunctional councils) were also relied upon to address coordination and communication deficiencies. For example, one participant described “problems with hand-offs” between application and infrastructure groups: • Analysts who proposed projects and product dates did not always tap into infrastructure and capacity issues. • Developers preparing new applications for desktop workstations did not alert infrastructure teams to the new desktops specs required to run them. In some situations, teams or committees were used to promote coordination: • Periodic, multi-day meetings of geographically dispersed IT management team members in order to share “what everyone is doing” • Cross-functional, cross-process councils to set IT resource priorities • Centers of Excellence approaches to build and leverage valued IT competencies Building a New Mix of IT Expertise Finding IT professionals with the desired mix of competencies and skillsets is still a tough challenge. The need for a range of technical, interpersonal, business consulting, and problem-solving skillsets is not new but there is a new emphasis on finding people with combined skillsets. Among the skill shortages in IT organizations increasingly dependent on external vendor 132

Designing a Process-Based IT Organization solutions are “business technologists” who have a combination of business organization knowledge and package application skills. A combination of contract management knowledge and technology expertise was also increasingly important to “see through vendor hype.” Sourcing solutions included not only importing new talent, but also growing it from within. One firm used standardized methodologies as training tools, in much the same way that they have been used by consulting organizations: the methodology guided behavior until it was internalized by the IT staff member. CONCLUSION IT organizations in process-based firms no longer merely support the business — they are an integral part of the business. IT leaders therefore need to develop new sets of IT processes and disciplines to align the IT organization with the business. Some of the challenges to be addressed when evolving to a process-based IT organization include working under complex structures, devising new metrics, making coordination “natural,” and building a new mix of internal and external IT expertise. Related Readings Brown, C.V., “Horizontal Mechanisms under Differing IS Contexts,” MIS Quarterly, 23(3), 421–454, September 1999. Brown, C.V. and Sambamurthy, V., Repositioning the IT Organization to Enable Business Transformation, Pinnaflex, Cincinnati, OH, 1999. Feeney, D.F. and Wilcocks, L.P., “Core IS Capabilities for Exploiting Information Technology,” Sloan Management Review, 40, 9–21, Spring 1998. Rockart, J.F., Earl, M.J., and Ross, J.W., “Eight Imperatives for the New IT Organization,” Sloan Management Review, 38(1), 43–56, Fall 1996. Ross, J.W., Beath, C.M., and Goodhue, D.L., “Develop Long-Term Competitiveness through IT Assets,” Sloan Management Review, 38(1), 31–42, Fall 1996.



Appendix We began our study by developing a short list of companies known to us in which top management was striving to develop a more process-based organization. All 12 companies were Fortune 500 global manufacturing firms headquartered in the United States, competing primarily in consumer products, healthcare, chemicals, and high-technology industries. All but one had implemented, or were in the process of implementing, an ERP system. Nine of the participants were corporate CIOs and three were direct reports to corporate CIOs. Two interviews were conducted on-site jointly by both authors, while the other ten interviews were conducted over the telephone by one of the two authors. Each interview lasted approximately one hour. To ensure a common framework and provide a consistent pattern of questions across the interviews, a one-page description of the study with the general questions of interest was provided to each participant in advance of the interview.


Chapter 12

Preparing for the Outsourcing Challenge N. Dean Meyer

The difference between fruitful partnerships with vendors and outsourcing nightmares is not simply how well you select the vendor and negotiate the deal. It has more to do with how you decide when to use vendors (vs. internal staff) and how well someone manages those vendors once you have hired them. To use vendors effectively, executives must see through the hype of sales pitches and the confusion of budget numbers to understand the fundamental trade-offs between vendors and internal staff and the unique value that each delivers. Our research shows that it requires healthy internal organizations, in which same-profession staff decide “make-vs.-buy” in a fact-based manner, case by case, day after day, and in which staff use their specialized expertise to manage vendors to internal standards of excellence. In other words, successful management of vendors starts with the effective management of internal staff. This thesis may be counterintuitive, because outsourcing is generally viewed as an alternative to internal staff. It differs from much of the “common wisdom” about outsourcing for the following reason: it is a perspective on outsourcing from someone who is not in the outsourcing business and who has no vested interest in selling outsourcing. It is written from the vantage of someone who has spent decades helping executives solve the problems of poorly performing organizations, including enhancing their partnerships with vendors. Excerpted and adapted from: Outsourcing: How to Make Vendors Work for Your Shareholders, copyright 1999 NDMA Publishing, Ridgefield, CT. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


ACHIEVING STRATEGIC IT ALIGNMENT Recognizing that business executives’ interest in outsourcing often reflects frustration with internal IT operations, this chapter looks at the typical sources of dissatisfaction. Such a look leads to an understanding of what it takes to make internal service providers competitive alternatives to outsourcing, and how they can help a corporation get the best value from vendors. But, first, it examines vendors’ claims to put the alternative into perspective. CLAIMS AND REALITY Outsourcing vendors have promised dramatic cost savings, along with enhanced flexibility and the claim that line executives will have more time to focus on their core businesses. Although economies of scale can theoretically reduce costs, outsourcing vendors also introduce new costs, not the least of which is profits for their shareholders. Cost savings are typically real only when there are significant economies of scale that cross corporate boundaries. Similarly, the sought-after ability to shift fixed costs (such as people) to variable costs is diminished by vendors’ requirements for long-term contracts for basic services that provide them with stable revenues over time. Performance claims beyond costs are also suspect. For example, the improved client accountability for the use of services that comes from clear invoicing can usually be achieved at lower cost by improving internal accounting. Similarly, outsourcing vendors rarely have better access to new technologies as claimed. How often do you hear of technology vendors holding products back from the market simply to give an outsourcing customer an advantage? As Tom Peters and Robert Waterman said years ago, successful companies “stick to their knitting.”1 Vendors claim that outsourcing leaves business managers more time to focus on the company’s primary lines of business. But this is only true if the people who used to manage the outsourced function are transferred into other business units. On the other hand, if these managers are fired or transferred to the outsourcing vendor, there will be no more managers focusing on the “knitting” than before outsourcing. Moreover, managing outsourcing vendors is no easier (in fact, it may be more difficult) than managing internal staff. Contracts and legal interpretations are involved, and it is challenging to try to guide people when you do not write their performance appraisals. Our research reveals that, contrary to conventional wisdom, many executives pursue outsourcing with or without fundamental economic benefits. Their real motivation is dissatisfaction with internal service functions. 136

Preparing for the Outsourcing Challenge THE REAL MOTIVATION Our analysis shows that there are four main reasons why executives might be willing to pay more to replace internal service providers with external vendors: 1. Customer focus. Internal providers may not treat their clients as customers and may attempt to dictate corporate solutions or audit clients’ decisions. External providers, of course, recognize these clients as customers and work hard to please them. 2. Tailoring. Corporate staff may believe they only serve corporatewide objectives, as if “one size fits all.” Of course, every business unit has a unique mission and a unique role in strategy, and hence unique requirements. Outsourcers are quite pleased to tailor their products and services to what is unique about their customers (for a price). 3. Control over priorities. To get internal providers to do any work may require a convoluted project-approval process, sometimes even requiring justifications to an executive committee. In other cases, it requires begging the internal providers, who set their own priorities. With outsourcing, on the other hand, all it takes is money. You buy what you want, when you want, with no need for approvals other than that of your boss who gave you the money to spend on your business. 4. Response time. Sometimes, internal staff develop long backlogs, and acquiring their services requires waiting in line for an untenably long time. By contrast, outsourcers can be very responsive (as long as the customer pays for the needed resources). When internal service providers address these four concerns, outsourcing must compete on its own merits — that is, on fundamental economics. If there is any good that comes from the threat of “losing the business” to an outsourcing company, it is that a complacent staff department is forced to respond to these legitimate concerns. The following sections discuss, first, a practical approach to improving the performance of an internal service function; and, second, methods needed to make fair service and cost comparisons between internal staff and outsourcing vendors. BUILDING COMPETITIVE INTERNAL SERVICE ORGANIZATIONS To improve internal service performance to competitive levels, the starting point is data collection. Client interviews and staff feedback reveal problems that need to be addressed. These symptoms provide impetus to change and guidance on what needs to be changed. 137

ACHIEVING STRATEGIC IT ALIGNMENT Next, it is vital to create a practical vision of the ideal organization. A useful way to approach this is to brainstorm answers to the following question: “What should be expected of a world-class service provider?” Examples include the following: • The provider is expected to designate an “account executive” for each business unit who is available to answer questions, participate in clients’ meetings, and facilitate their relationships with the function. • The provider is expected to proactively approach key opinion leaders and help them identify breakthrough opportunities for the function’s products in an unbiased, strategy-driven manner. • The provider is expected to proactively facilitate the formation and operation of consortia of clients with like needs. • The provider is expected to help clients plan and defend budgets to buy the function’s products. • In response to clients’ requests, the provider is expected to proactively offer a range of viable alternatives (as in Chevrolet, Cadillac, or Rolls Royce) and supply all the information clients need to choose. • Whenever possible, the provider is expected to design products using common components and standards to facilitate integration (without sacrificing its ability to tailor results to clients’ unique needs). • The provider is expected to assign to each project the right mix of skills and utilize a diversity of vendors whenever others offer more cost-effective solutions. Such a brainstorming stretches leaders’ thinking about what is expected of them, and builds a common vision of the organization they wish to build. These vision statements can also teach clients to demand more of their suppliers, internal and external. Clearly, when clients express interest in outsourcing, there is a good chance that they see it as a commodity rather than a core competence of the company. On the other hand, when its strategic value is appreciated, a function may be kept internal even if its costs are a bit higher. The price premium is more than repaid by the incremental strategic value that internal staff can contribute (and outside vendors cannot). Next, leaders assess the current performance of the organization against their vision. This reaffirms the need for change and identifies additional concerns to be addressed. A plan is then developed by analyzing the root causes of the concerns identified in the self-assessment and by identifying the right sequence of changes needed to build a high-performance organization that delivers visible strategic value.2 138

RESPONDING TO AN OUTSOURCING CHALLENGE It is generally difficult to compare an internal service provider’s budget to a vendor’s outsourcing proposal. This is probably the greatest problem faced by internal service providers when they attempt to respond to an outsourcing challenge. There are two primary causes for this confusion: First, internal budgets are customarily presented in a fashion that makes it difficult to match costs to individual deliverables. Second, internal staff are generally funded to do things that external vendors do not have to (and should not) do. Budgeting by Deliverables Most internal budgets are presented in a manner that does not give clients an understanding of what they are buying. To permit a fair comparison of costs, an internal service provider must change the way it presents its budget. Consider a budget spreadsheet, where the columns represent cost factors such as salaries, travel expenses, professional development, etc. The rows represent deliverables (i.e., specific projects and services).

              Salaries    Travel    Training
Project 1        $           $          $
Project 2        $           $          $
Service 3        $           $          $
Service 4        $           $          $

This sort of spreadsheet is a common, and sensible, way to develop a budget. The problem is, after filling in the cells in this spreadsheet, most organizations total the columns instead of the rows, presenting the budget in terms of cost factors. This, of course, invites the wrong kind of dialogue during the budget process. Executives debate the organization’s travel budget, micromanaging staff in a way that they never would an outsourcing vendor. Even worse, executives lose sight of the linkage between the organization’s budget and the deliverables they expect to receive during the year. They do not know what they are getting for their money, so the function seems expensive. At the same time, this approach leads clients to expect that they will get whatever they need within the given budget, making it the staff’s problem to figure out how to fulfill clients’ unlimited demands. Put simply, clients are led to expect infinite products and services for a fixed price!
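The row/column arithmetic described above is easy to see in a short, illustrative calculation. The sketch below (Python, with purely hypothetical deliverable names and dollar figures that are not taken from the chapter) fills in the same kind of spreadsheet and then contrasts totaling the columns (budgeting by cost factors) with totaling the rows (budgeting by deliverables).

# Hypothetical budget data: rows are deliverables, columns are cost factors.
budget = {
    "Project 1": {"Salaries": 120_000, "Travel": 8_000, "Training": 4_000},
    "Project 2": {"Salaries": 90_000,  "Travel": 5_000, "Training": 3_000},
    "Service 3": {"Salaries": 200_000, "Travel": 2_000, "Training": 6_000},
    "Service 4": {"Salaries": 60_000,  "Travel": 1_000, "Training": 2_000},
}

# Budgeting by cost factors: total the columns (what most organizations present).
by_cost_factor = {}
for costs in budget.values():
    for factor, amount in costs.items():
        by_cost_factor[factor] = by_cost_factor.get(factor, 0) + amount

# Budgeting by deliverables: total the rows (what permits comparison with a vendor bid).
by_deliverable = {name: sum(costs.values()) for name, costs in budget.items()}

print("By cost factor: ", by_cost_factor)   # invites debate about travel and training line items
print("By deliverable: ", by_deliverable)   # shows what each product or service actually costs

Only the second presentation lets a client line up each internal deliverable against the corresponding item in a vendor’s bid.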

ACHIEVING STRATEGIC IT ALIGNMENT Success in this situation is, of course, impossible. As hard as staff try, the internal service provider gets blamed for both high costs and unresponsiveness. Meanwhile, outsourcing vendors can offer bids that appear less costly simply by promising less. Executives have no way of knowing if the proposed level of service is comparable to what they are receiving internally. While vendors are generally quite clear about the deliverables within their proposed contracts, the internal organization’s deliverables remain undocumented. When comparing a short list of outsourced services to a long but undocumented list of internal services, the vendor may very well appear less expensive. Of course, comparing “apples to oranges” is quite misleading and unfair. The answer to this predicament is simply presenting the internal budget in a different way. The internal service provider should total the rows, not the columns. This is termed “budget by deliverables,” the opposite of budgeting by cost factors. With a budget presented in terms of deliverables, executives are often surprised to learn just how much an internal service provider is doing to earn its keep. Budget by deliverables permits a fair comparison of the cost of buying each product and service from internal staff vs. an outsourcing vendor. In many cases, clients learn that, although the vendor appears to be less expensive in total, it is offering fewer services and perhaps a lower quality of service than internal staff currently provide. It is unfortunate that it often takes an outsourcing challenge to motivate the consideration of a budget-by-deliverables approach, as it is broadly useful. One key benefit is that the debate during the budget process becomes much more constructive. Instead of demanding that staff do more with less, executives decide what products and services they will and won’t buy. Trimming the budget is driven by clients, not staff, and, as a result, is better linked to business priorities. Once a budget by deliverables is agreed on, another ongoing benefit is that clients understand exactly what they can expect from staff. Of course, if they want more, internal staff should willingly supply it — at an additional cost. This is one critical part of an “internal economy” that balances supply and demand. Recognizing Subsidies Staff do some activities for the common good (to benefit any and all clients). Because these deliverables are done on behalf of the entire firm, they are often “taken for granted” or not noticed at all by clients. Nonetheless, 140

Preparing for the Outsourcing Challenge these important “corporate good” activities must be funded. We call these “subsidies.” One example is the service of facilitating the development of corporate standards and policies. Another example of a subsidy activity is commodity-product research and advice (a “consumers’ report”). For example, in IS, staff may research the best configurations of personal computers for various uses. This research service helps clients make the right choices, whether they buy PCs through mail order or internal staff. Corporate-good activities should not be delegated to vendors who have different shareholders in mind. In a budget by deliverables, subsidies should be highlighted as separate rows. Their costs should not be buried within the price of other deliverables. If the costs of these services were spread across other internal products, they would inflate the price of the rest of staff’s product line and put them at an unfair disadvantage when compared to external competitors who do not do these things. In our IS example, if the costs of the PC research were buried in the price of PCs, then mail-order vendors would outcompete the internal IS department (even though staff’s bulk purchasing might negotiate an even better deal for the firm). As more clients bought directly from external vendors, the fixed costs of PC research would have to be spread across fewer units, and the price of a PC would rise further — chasing even more business away. Eventually, this drives the internal IS department out of the business of supplying PCs. This distortion is particularly critical during an outsourcing study. If subsidies are not separated from the cost of competitive products, the outsourcing vendor may win the business, even though its true unit costs may be higher. Later, the corporation will find that critical corporate-good activities do not get done. Funding subsidies individually separate the outsourcing decision (which focuses on ongoing products and services) from the decision to invest in the subsidies. It permits a fair comparison with competitors of the prices of specific products and services. It also encourages a thoughtful decision process around each such activity, leading to an appropriate level of corporate-good efforts. It is worth noting that once we compare “apples to apples” in this way, many internal service providers are found to offer a very competitive deal. Making sure that clients are aware of this is a key to a permanent role as “supplier of choice.” 141
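The pricing spiral sketched in the PC-research example can be made concrete with a small, purely illustrative calculation; the cost figures and the defection rule below are hypothetical assumptions, not data from the chapter. Burying a fixed subsidy in the unit price raises that price as volume falls, which widens the gap with the unsubsidized competitor and drives still more volume away.

# Illustrative only: a fixed "corporate good" cost buried in a product's unit price.
FIXED_RESEARCH_COST = 200_000      # annual PC research subsidy (hypothetical)
UNIT_COST = 900                    # internal cost to supply one PC (hypothetical)
COMPETITOR_PRICE = 950             # mail-order price with no subsidy burden (hypothetical)

volume = 2_000                     # PCs bought internally this year
for year in range(1, 6):
    internal_price = UNIT_COST + FIXED_RESEARCH_COST / volume
    print(f"Year {year}: volume={volume:5d}, internal price=${internal_price:,.0f}")
    # Assume clients defect in proportion to the price gap with the competitor.
    gap = max(internal_price - COMPETITOR_PRICE, 0)
    volume = int(volume * max(1 - gap / 500, 0.1))

Funding the research as its own budget row, as recommended above, removes the artificial price gap and stops the spiral before it starts.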

ACHIEVING STRATEGIC IT ALIGNMENT Activity-Based Costing Analysis While the logic of budget by deliverables is straightforward and compelling, the mechanics are not so simple. Identifying an organization’s products and services — not tasks, but deliverables — is, in itself, a challenge. The level of detail must be carefully managed so that each row represents a meaningful client purchase decision, without inundating clients with more than they can comprehend. Once the products are identified, allocating myriad indirect costs to a specific set of deliverables is a challenge in “activity-based costing.” Many have found an activity-based costing analysis difficult for even one or two lines of business. To prepare a budget by deliverables requires a comprehensive analysis across all products and services. This adds unique problems, such as “circles,” where two groups within the organization serve one another, and hence each is part of the other’s cost structure and neither can determine its price until the other does. Fortunately, there is a step-by-step process that resolves such complications and leads to a clear result.3 The budget-by-deliverables process begins with the identification of lines of business and deliverables. For each deliverable, a unit of costing (such as hours or clients supported) is identified, and a forecast of the number of units required to produce the deliverable is made. Next, indirect costs are estimated and allocated to each row (each deliverable). Direct costs are added to each row as well. Overhead costs (initially their own rows) are “taxed” to the other deliverables. Then, all the groups within the organization combine their spreadsheets, and the total cost for each deliverable is summed. With minor modifications to the budget-by-deliverables process, the analysis can produce unit prices (fees for services) at the same time as the budget. The result of the budget-by-deliverables process is a proposal that estimates the true cost to shareholders of each staff deliverable, making for fact-based budgeting decisions and fair (and, hopefully, favorable) comparisons with outsourcing vendors’ proposals. VENDOR PRICING: LESSONS FROM THE PAST When comparing a staff’s budget with an outsourcing proposal, some additional considerations are important to note. Even if internal costs are lower than outsourcing, comparisons may be distorted by some common vendor tactics. Outsourcing vendors sometimes “buy the business” by offering favorable rates for the first few years of a contract and making up for the loss of profits throughout the rest of the relationship. 142

Preparing for the Outsourcing Challenge This tactic may be supported by excess capacity that the vendor can afford to sell (for a short while) below full costs. Or, the vendor may be generating sufficient profits from other companies to afford a loss for a certain period. Neither enabling factor lasts for long. In the long run, this leads to higher costs, even in discounted-present-value terms, because entrepreneurs will always find ways to be compensated for taking such risks. A similar technique is pricing basic services at or below costs, and then making up the profits on add-on business. Tricks that make outsourcing appear less expensive are best countered by demanding a comparison of all costs over a longer period. Costs should include all activities of the function, itemized in a way that permits comparisons under different scenarios, forecasting increased and decreased demands. The term need not be limited to the initial proposed contract. A longer time frame is justifiable, because an outsourcing decision is difficult to reverse. It takes years to rebuild an internal capability and transition competencies from a vendor back to staff. Thus, it makes sense to ask vendors to commit to prices for ten or more years. EXTENDED STAFFING Too often, if a staff function is not working right, outsourcing has simply been a method of paying someone else to take the pain. It is a way to avoid expending the time and energy to build a high-performance internal service provider. But paying profits to other shareholders is short-sighted because it sacrifices a potentially valuable component of business strategy and drives up long-term costs. Shirking tough leadership duties in this manner is also mean-spirited. It destroys careers, with little appreciation for people’s efforts and the obstacles they faced, and it creates an environment of fear and destroys morale for those who remain. Even partial outsourcing — sometimes buying from vendors and at other times from staff — is not constructive in the long term because it allows internal service groups to deteriorate. As they lose market share, internal organizations shrink, lose critical mass, and get worse and worse. There is no substitute for proper resolution of clients’ concerns by investing in building an internal service provider that earns clients’ business. This, of course, does not mean that outside vendors are avoided where they can contribute significant and unique value. In fact, one aspect of a healthy internal service provider is its proactive use of vendors. We call this approach to managing vendors extended staffing. 143

ACHIEVING STRATEGIC IT ALIGNMENT A healthy organization divides itself into clearly defined lines of business, each run by an entrepreneur. Each entrepreneur should know his or her competitors and continually benchmark price and performance against them. This is not tough to do. Those very competitors are also potential extensions to the internal staff. If demand goes up, every entrepreneur should have vendors and contractors lined up, ready to go. And whenever they bid a deal, staff should propose a “buy” alternative along side their “make” option. By treating vendors and contractors as extensions to internal staff — rather than replacements for them — extended staffing enhances, rather than undermines, internal service providers. And by bringing in vendors through (not around) internal staff, extended staffing gives employees a chance to learn and grow. When internal staff proactively use vendors, making educated decisions on when it makes sense to do so, the firm always gets the best deal. With confidence in their niche, ethical vendors are happy to compete for business on the merits of their products rather than attempt to replace internal staff with theirs. Furthermore, by using the people who best know the profession to manage vendors and contractors, extended staffing also ensures that external vendors live up to internal standards of excellence. Extended staffing automatically balances the many trade-offs between making and buying goods and services. It leads to the right decisions, in context, day after day. Notes 1. Peters, T.J. and Waterman Jr., R.H. 1982. In Search of Excellence. New York: Harper & Row. 2. A tested, effective process of systemic change is discussed in detail in Meyer, N. D. 1998. Road map: How to Understand, Diagnose, and Fix Your Organization. Ridgefield, CT: NDMA Publishing. 3. Meyer, N.D. 1998. The Internal Economy: A Market Approach. Ridgefield, CT: NDMA Publishing.


Chapter 13

Managing Information Systems Outsourcing S. Yvonne Scott

IS outsourcing is not a new trend. Today, it is a mature concept — a reality. Service bureaus, contract programmers, disaster recovery sites, data storage vendors, and value-added networks are all examples of outsourcing. However, outsourcing is not a transfer of responsibility. Tasks and duties can be delegated, but responsibility remains with the organization’s management. This chapter provides guidelines for effectively managing IS outsourcing arrangements. OUTSOURCING AGREEMENTS Although it is desirable to build a business partnership with the outsource vendor, it is incumbent on the organization to ensure that the outsourcer is legally bound to take care of the company’s needs. Standard contracts are generally written to protect the originator (i.e., the vendor). Therefore, it is important to critically review these agreements and ensure that they are modified to include provisions that adequately address the following issues. Retention of Adequate Audit Rights It is not sufficient to generically specify that the client has the right to audit the vendor. If the specific rights are not detailed in the contract, the scope of a review may be subject to debate. To avoid this confusion and the time delays that it may cause, it is suggested that, at a minimum, the following specific rights be detailed in the contract:



ACHIEVING STRATEGIC IT ALIGNMENT • Who can audit the outsourcer (i.e., client internal auditors, outsourcer internal auditors, independent auditors, user-controlled audit authority)? • What is subject to audit (e.g., vendor invoices, physical security, operating system security, communications costs, and disaster recovery tests)? • When the outsourcer can or cannot be audited. • Where the audit is to be conducted (e.g., at the outsourcer’s facility, remotely by communications). • How the audit is conducted (i.e., what tools and facilities are available). • Guaranteed access to the vendor’s records, including those that substantiate billing. • Read-only access to all of the client company’s data. • Assurance that audit software can be executed. • Access to documentation. • Long-term retention of vendor records to prevent destruction. Continuity of Operations and Timely Recovery The timeframes within which specified operations must be recovered, as well as each party’s responsibilities to facilitate the recovery, should be specified in the contract. In addition, the contract should specify the recourse that is available to the client, as well as who is responsible for the cost of carrying out any alternative action, should the outsourcer fail to comply with the contract requirements. Special consideration should be given to whether or not these requirements are reasonable and likely to be carried out successfully. Cost and Billing Verification Only those costs applicable to the client’s processing should be included in invoices. This issue is particularly important for those entering into outsourcing agreements that are not on a fixed-charge basis. Adequate documentation should be made available to allow the billed client to determine the appropriateness and accuracy of invoices. However, documentation is also important to those clients who enter into a fixed invoice arrangement. In such cases, knowing the actual cost incurred by the outsourcer allows the client to effectively negotiate a fair price when prices are open for renegotiation. It should also be noted that, although long-term fixed costs are beneficial in those cases in which costs and use continue to increase, they are equally detrimental in those situations in which costs and use are declining. Therefore, it is beneficial to include contract clauses that allow rates to be reviewed at specified intervals throughout the life of the contract, or in the event of a business downturn (e.g., sale of a division).
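The trade-off just described (long-term fixed charges help the client when use keeps growing but hurt when use declines) can be illustrated with a simple comparison. The rates and volumes in this sketch are hypothetical assumptions for illustration only, not figures from the chapter.

# Illustrative comparison (hypothetical rates): a long-term fixed annual charge
# versus usage-based billing, under growing and declining transaction volumes.
FIXED_ANNUAL_CHARGE = 1_200_000
RATE_PER_UNIT = 10.0

def total_cost(volumes):
    fixed = FIXED_ANNUAL_CHARGE * len(volumes)
    usage = sum(v * RATE_PER_UNIT for v in volumes)
    return fixed, usage

growing   = [100_000, 120_000, 145_000, 175_000, 210_000]
declining = [100_000,  85_000,  70_000,  55_000,  40_000]

for label, volumes in (("growing use", growing), ("declining use", declining)):
    fixed, usage = total_cost(volumes)
    better = "fixed" if fixed < usage else "usage-based"
    print(f"{label:14s}: fixed=${fixed:,.0f}  usage-based=${usage:,.0f}  cheaper: {better}")

This is the arithmetic behind the recommendation to include contract clauses that reopen rates at specified intervals or after a business downturn.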


Managing Information Systems Outsourcing Security Administration Outsourcing may be used as an agent for change and, therefore, may represent an opportunity to enhance the security environment. In any case, decisions must be made regarding whether the administration (i.e., granting access to data) and the monitoring (i.e., violation reporting and followup) should be retained internally or delegated to the outsourcer. In making this decision, it is imperative that the company have confidence that it can maintain control over the determination of who should be granted access and in what capacity (e.g., read, write, delete, execute) to both its data and that of its customers. Confidentiality, Integrity, and Availability Care must be taken to ensure that both data and programs are kept confidential, retain their integrity, and are available when needed. These requirements are complicated when the systems are no longer under the physical control of the owning entity. In addition, the concerns that this situation poses are further compounded when applications are stored and executed on systems that are shared with other customers of the outsourcer. Of particular concern is the possibility that proprietary data and programs may be resident on the same physical devices as those of a competitor. Fortunately, technology has provided us with the ability to logically control and separate these environments with virtual machines (e.g., IBM’s Processor Resource/System Management). It should also be noted that the importance of confidentiality does not necessarily terminate with the vendor relationship. Therefore, it is important to obtain nondisclosure and noncompete agreements from the vendor as a means of protecting the company after the contract expires. Similarly, adequate data retention and destruction requirements must be specified. Program Change Control and Testing The policies and standards surrounding these functions should not be relaxed in the outsourced environment. These controls determine whether or not confidence can be placed in the integrity of the organization’s computer applications. Vendor Controls The physical security of the data center should meet the requirements set by the American Society for Industrial Security. In addition, there should be close compatibility between the vendor and the customer with regard to control standards.


ACHIEVING STRATEGIC IT ALIGNMENT Network Controls Because the network is only as secure as its weakest link, care must be taken to ensure that the network is adequately secured. It should be noted that dial-up capabilities and network monitors can be used to circumvent established controls. Therefore, even if the company’s operating data is not proprietary, measures should be taken to ensure that unauthorized users cannot gain access to the system. This should minimize the risks associated with unauthorized data, program modifications, and unauthorized use of company resources (e.g., computer time, phone lines). Personnel Measures should be taken to ensure that personnel standards are not relaxed after the function is turned over to a vendor. As was noted earlier, in many cases the same individuals who were employed by the company are hired by the vendor to service that contract. Provided these individuals are competent, this should not pose any concern. If, however, a reason cited for outsourcing is to improve the quality of personnel, this situation may not be acceptable. In addition, care should be taken to ensure that the client company is notified of any significant personnel changes, security awareness training is continued, and the client company is not held responsible should the vendor make promises (e.g., benefits, salary levels, job security) to the transitional employees that it does not subsequently keep. Vendor Stability To protect itself from the possibility that the vendor may withdraw from the business or the contract, it is imperative that the company maintain ownership of its programs and data. Otherwise, the client may experience an unexpected interruption in its ability to service its customers or the loss of proprietary information. Strategic Planning Because planning is integral to the success of any organization, this function should be performed by company employees. Although it may be necessary to include vendor representatives in these discussions, it is important to ensure that the company retains control over the use of IS in achieving its objectives. Because many of these contracts are long term and business climates often change, this requires that some flexibility be built into the agreement to allow for the expansion or contraction of IS resources. In addition to these specific areas, the following areas should also be addressed in the contract language: 148

• Definition and assignment of responsibilities
• Performance requirements and the means by which compliance is measured
• Recourse for nonperformance
• Contract termination provisions and vendor support during any related migration to another vendor or in-house party
• Warranties and limitations of liability
• Vendor reporting requirements

PROTECTIVE MEASURES DURING TRANSITION

After it has been determined that the contractual agreement is in order, a third-party review should be performed to verify vendor representations. After the contract has been signed and as functions are being moved from internal departments to the vendor, an organization can enhance the process by performing the following:

• Meeting frequently with the vendor and employees
• Involving users in the implementation
• Developing transition teams and providing them with well-defined responsibilities, objectives, and target dates
• Increasing security awareness programs for both management and employees
• Considering a phased implementation that includes employee bonuses for phase completion
• Providing outplacement services and severance pay to displaced employees

CONTINUING PROTECTIVE MEASURES

As the outsourcing relationship continues, the client should continue to take proactive measures to protect its interests. These measures may include continued security administration involvement, budget reviews, ongoing reviews and testing of environment changes, periodic audits and security reviews, and letters of agreement and supplements to the contract. Each of these client rights should be specified in the contract. In addition, a continuing review and control effort typically includes the following types of audit objectives:

• Establishing the validity of billings
• Evaluating system effectiveness and performance
• Reviewing the integrity, confidentiality, and availability of programs and data
• Verifying that adequate measures have been taken to ensure continuity of operations
• Reviewing the adequacy of the overall security environment
• Determining the accuracy of program functionality

AUDIT ALTERNATIVES

It should be noted that resource sharing (i.e., the sharing of common resources with other customers of the vendor) may lead to the vendor's insistence that the audit rights of individual clients be limited. This may be reasonable. However, performance review by the internal audit group of the client is only one means of approaching the control requirement. The following alternative measures can be taken to ensure that adequate control can be maintained.

• Internal reviews by the vendor. In this case, the outsourcing vendor's own internal audit staff would perform the reviews and report their results to the customer base. Auditing costs are included in the price, the auditor is familiar with the operations, and it is less disruptive to the outsourcer's operations. However, auditors are employees of the audited entity; this may limit independence and objectivity, and clients may not be able to dictate audit areas, scope, or timing.
• External auditor or third-party review. These types of audits are normally performed by an independent accounting firm. This firm may or may not be the same firm that performs the annual audit of the vendor's financial statements. In addition, the third-party reviewer may be hired by the client or the vendor. External auditors may be more independent than employees of the vendor. In addition, the client can negotiate for the ability to exercise some control over the selection of the third-party auditors and the audit areas, scope, and timing, and the cost can be shared among participating clients. The scope of external reviews, however, tends to be more general in nature than those performed by internal auditors. In addition, if the auditor is hired by the vendor, the perceived level of independence of the auditor may be impaired. If the auditor is hired by each individual client, the costs may be duplicated by each client and the duplicate effort may disrupt vendor operations.
• User-controlled audit authority. The audit authority typically consists of a supervisory board comprising representatives from each participating client company, the vendor, and the vendor's independent accounting firm, and a staff comprising some permanent and temporary members who are assigned from each of the participating organizations. The staff then performs audits at the direction of the supervisory board. In addition, a charter detailing the rights and responsibilities of the user-controlled audit authority should be developed and accepted by the participants before commissioning the first review.

This approach to auditing the outsourcing vendor appears to combine the advantages and minimize the disadvantages previously discussed. In addition, this approach can benefit the vendor by providing a marketing advantage, supporting its internal audit needs, and minimizing operational disruptions.

CONCLUSION

Outsourcing arrangements are as unique as those companies seeking outsourcing services. Although outsourcing implies that some control must be turned over to the vendor, many measures can be taken to maintain an acceptable control environment and adequate review. The guidelines discussed in this chapter should be combined with the client's own objectives to develop individualized and effective control.



Chapter 14

Offshore Development: Building Relationships across International Boundaries Hamdah Davey Bridget Allgood

Outsourcing information systems (IS) has grown in popularity in recent years for a wide variety of reasons — it is perceived to offer cost advantages, provide access to a skilled pool of labor, increase staffing flexibility, and allow the company to concentrate on core competencies. These factors have challenged managers to rethink the way in which IS has been delivered within their organizations and to consider looking overseas for technical skills. Outsourcing IS work is not a new phenomenon; many aspects of IS have historically been outsourced. Traditionally, service bureaus dealt with payroll processing for companies. In recent times, hardware maintenance or user support is frequently provided by an external company. IS activities that can be easily separated and tightly specified are seen as ideal candidates for outsourcing. Systems development work is by its very nature more difficult to outsource because it is rarely a neat, tightly specified project. The continually changing business environment means that requirements refuse to stand still; they cannot simply be drawn up and



ACHIEVING STRATEGIC IT ALIGNMENT thrown over the wall to software developers. An interactive development environment is needed, where business managers and users communicate with the development team, thus ensuring that systems that people want are being developed. OFFSHORE IS OUTSOURCING Offshore outsourcing has grown in popularity and is rapidly emerging in many countries such as India, Mexico, and Egypt. In particular, India is regarded as one of the major offshore outsourcing centers where private and government partnerships have proactively worked together to develop IT capability. To some, India may not seem to be the obvious country in which to locate technical computing expertise because of its struggles to provide basic sanitation, water, and electricity to many of its rural areas. Yet the reality is that the Indian government and the Indian software industry have worked together to create software technology parks (STPs) that provide high-speed satellite communication links, office space, computing resources, and government liaison. In this supportive environment, software houses have thrived and provide a variety of services that range from code maintenance and migration work, to designing and building new applications with the latest software technologies. The availability of a skilled pool of English-speaking IT developers with the latest technical knowledge, able to handle large projects and produce quality software, is attracting many major companies to outsource systems development projects to India. The cost of developer time in India is significantly less than market prices in the West, which makes offshore outsourcing particularly attractive. Below we describe the experiences of a U.K.-based company that outsourced system development work to India. CASE STUDY: IS OUTSOURCING This case example describes the company's experiences and answers key IS outsourcing questions such as: • “What factors led you to make the decision to outsource systems development work to India?” • “How was a relationship built and maintained across international borders?” • “What cross-cultural issues were evident?” Driving for a Change in Direction LEX Vehicle Leasing is one of the largest vehicle contract hire companies in the United Kingdom, managing more than 98,000 vehicles. LEX specializes 154

Offshore Development in buying, running, and disposing of company vehicles for its customers. The concept of outsourcing was not new to LEX. Hardware maintenance, network maintenance, and desktop hardware support had been outsourced to U.K. suppliers for some time. In 1997, LEX signed a contract to outsource the systems development work of its contract leasing administration system to an India-based software house. This step was a new direction because at the time, LEX had no previous experience in outsourcing system development work offshore. The decision to outsource the work to India was based on considerations such as the need to develop a system quickly, the availability of skilled IT personnel in India, and substantial cost savings. Managing Projects and Relationships across International Boundaries Offshore outsourcing presents many challenges to business. Building effective client–supplier relationships and managing a project across national and cultural boundaries can be particularly difficult. From the beginning of the project, LEX was aware of the challenges it would face in this area and felt that it was crucial to feel comfortable with the outsourcing supplier. LEX staff visited the outsourcing vendor in India prior to signing a contract for the project so that they could gain insight into the organizational culture of the potential supplier to ensure that they would be able to work in partnership with the organization. LEX also wanted to maintain close control of the project and work closely with the supplier throughout the project. Alignment of Personal Qualities and Cultural Fit. A team from LEX visited the office in Bombay prior to drawing up the contract. During this visit, the LEX team had the opportunity to liaise and meet with the outsourcing supplier staff. The LEX team was comfortable with the skills of the outsourcing staff and the key personnel. They felt that the two organizations had shared values within their cultures, and the personalities of key personnel within the outsourcing company aligned well with the staff within LEX. “There is synergy between our companies. We both believe in developing our employees, keeping staff happy, and also delivering substantial profit.”

The Indian outsourcing supplier was enthusiastic and keen to take on the work. They were perceived to be a quality company with a positive customer service culture. One LEX manager commented: “To delight customers is an attractive trait sadly lacking in U.K. software houses.” Project Management and Control. The systems development project outsourced to India was highly structured with fully defined systems requirements. Although interaction between users and designers was acknowledged as being needed, the project was felt to be fairly tightly specified. 155

ACHIEVING STRATEGIC IT ALIGNMENT LEX was keen to be closely involved with the day-to-day running of the project because one of the biggest worries in offshore outsourcing is managing the project from afar. The outsourcing supplier encouraged close involvement; this approach compared favorably with the company's experiences with some U.K.-based software houses that preferred LEX to take a hands-off approach. LEX was aware of the importance of effective communication between all staff involved with the project and took steps to encourage effective communication at all levels. Cultural differences had a significant impact on the project, and sensitive management was needed when handling situations. Working in close cooperation was seen as important to ensure the success of the project. However, building strong relationships takes time, and early on in the project both parties found it difficult to discuss concerns that arose. One manager noted a “them and us” attitude at the beginning of the project. With an emphasis on good communication, and building and maintaining close relationships at all levels, strong relationships did develop over time. Also, neither party resorted “to the contract” when there were disputes or problems. Steps to Ensure Effective Communication A number of initiatives were implemented to facilitate effective communication between LEX and the outsourcing supplier during this project: • The importance of face-to-face meetings was recognized throughout the project. Meetings were arranged in the United Kingdom between the users and the Indian staff at the beginning of the project. After the initial meetings, some of the Indian designers remained in the United Kingdom, while others returned to India to lead teams. When coding was completed, users from LEX went to India to undertake acceptance testing. • New communications technology, such as computer conferencing, email, and a centralized store of documents relating to the project, was used. E-mail was used extensively as a communication medium; the project manager established communication procedures to manage the high volume of e-mails between India and the United Kingdom. For example, if a change was made to a program, an automatic message would be sent to all parties concerned. • To further improve communication, LEX formed a team of ten business users to act as a link with the Indian employees. The Indian developers had ready access to these users and channeled their problems to them. This enabled the Indian designers to resolve issues effectively and save time. 156

• LEX appointed a full-time project manager who was culturally sensitive and very aware of "people" issues. This manager had responsibility for managing the outsourcing arrangement. Whether they were offshore in Bombay or onshore in the United Kingdom, the Indian employees worked for the LEX project manager. This arrangement meant that the project was able to respond quickly to changing business needs. This was viewed as a major success factor in the project.

Steps to Address Cultural Differences

Recognizing the complexity of the human element of the project due to cultural differences is important. Unforeseen problems, misunderstandings, and incorrect assumptions occurred due to cultural differences, and sensitivity was required when dealing with such issues.

• The LEX management found that the Indian staff had a very positive attitude and were very committed to the project. The Indian staff were also flexible and generally quick to pick up ideas.
• At the beginning of the project, a team of Indian designers was brought to the United Kingdom for a period of time so that they could meet and work with the LEX users. Many of the Indian employees had never been outside India and suffered culture shock when they first arrived in the United Kingdom, needing time to adapt to the British culture.
• Although the Indian staff spoke English, they experienced difficulties in interacting with the LEX employees at the beginning of the project. This was because of variations in pronunciation and differences in the meanings of words. There was an expectation from the U.K. users that, when the Indian staff arrived, they could explain the features they wanted in the system and be understood. But this was not the case, partly because of the business language used (e.g., bank mandates and contract payment schedules). Also, the concepts of company cars and leasing vehicles did not exist in India. The Indian staff, therefore, faced a steep learning curve in familiarizing themselves with both the business and the business terminology.
• The Indian culture places great importance on control. This led to some problems when working on the project because the Indian designers tended to defer to the project leader even for a minor decision.

RECOMMENDED ACTIONS FOR SUCCESSFUL OFFSHORE OUTSOURCING

Building and maintaining relationships across international boundaries can be very demanding, and good communication procedures and staff skills are needed to support effective offshore outsourcing. It needs to be appreciated that relationships take time to build and are particularly fragile during the introductory stages of a project. Cultural differences can result in misunderstandings, frustrations, and incorrect assumptions being made.

Both time and effort are required to develop and sustain effective, strong offshore outsourcing partnerships. The following actions are recommended for successful offshore outsourcing:

• Careful consideration of outsourcing partners is crucial to ensure that they are an organization with which you can build a relationship.
• It is important that the outsourcing organization takes actions to ensure that appropriate channels for good-quality communications are used. Although e-mail technology can play an important role, giving staff the opportunity to get to know one another through face-to-face meetings is important if strong relationships are to be built.
• Close involvement in the management and day-to-day running of the project by the outsourcing organization is important because it ensures that problems that arise are dealt with effectively and that a partnership style of working together is achieved.
• Sensitivity is needed when handling the various language and cultural issues that may emerge. This means that particular consideration should be given to the personal skills of the project manager and of all those coming into contact with the outsourcing supplier.

THE FUTURE

Offshore outsourcing offers many benefits to companies. Fruitful future opportunities lie with sustaining a high-quality partnership over time. The 30 Indian employees who worked on the LEX project have developed relationships with LEX staff, gained an understanding of LEX's business practices, and are ideally positioned to maintain the existing system and to develop future systems. By working together over a period of time, trust and understanding between the parties develop, allowing the full benefits of offshore outsourcing to be realized.


Chapter 15

Application Service Providers Mahesh Raisinghani Mike Kwiatkowski

Application service providers (ASPs) have received a large amount of attention in the information technology (IT) service industry and the financial capital markets. A quote from Scott McNealy, CEO of Sun Microsystems, is typical of the enthusiasm industry leaders have for the ASP service model: "Five years from now, if you're a CIO with a head for business, you won't be buying computers anymore. You won't buy software either. You'll rent all your resources from a service provider." Conceptually, ASPs have a great deal to offer customers. They can maintain applications such as e-mail, enterprise resource planning (ERP), and customer relationship management (CRM), while providing higher levels of service by utilizing economies of scale in order to provide a quality software product at a lower cost to the organization. The ASP value proposition is particularly attractive to small- and mid-sized enterprises (SMEs) that do not possess the IT infrastructure, staff, or capital to purchase high-end, corporatewide applications such as SAP, PeopleSoft, or Siebel. The goal of ASPs is to enable customers to use mission-critical enterprise applications in a better, faster, and more cost-efficient manner.

ROLE OF ASPS IN THE 21ST CENTURY ORGANIZATION

Many business scholars have attempted to define the shape and structure of effective organizations in the future. Overall, they unanimously predict technology will dramatically change the delivery methods of products and services. Organizational structures such as the "networked organizations" and "virtual organizations" will prosper. Networked organizations will differ from traditional hierarchical organizations in a few major ways. First, the structure will be more informal, flatter, and loosely structured.


Second, employees will be more empowered, treated as an asset, and their contributions will be measured based on how they function as a team. Finally, information will be shared and available both internally and externally to the organization. The ASP industry model can facilitate each of the major areas in the networked organization. Structurally, ASPs are a perfect fit for organizations that desire to become flatter and loosely structured because all of the IT staff overhead and data center infrastructure required to support the business is outsourced to the ASP. The increased need for employee empowerment to make decisions requires additional knowledge. This knowledge must be provided to employees through advanced information systems such as intelligent systems and the Internet. ASPs have the potential to provide the expert and intelligent systems required for supporting "knowledge workers." Historically, these systems have been cost prohibitive to install and maintain. The ASP model lowers the cost by sharing the system with many users and capitalizing on economies of scale. Information sharing is more efficient via the ASP's delivery method than that of traditional private networks or value-added networks. These networks required expensive leased lines and specialized telecommunications equipment for organizations to pass data and share information. In the ASP model, a business partner can access your systems via the Internet simply by pointing their browser to your ASP's host site. The virtual corporation (VC) is an organizational structure that is gaining momentum in today's economy, especially in the "E-economy" or world of electronic commerce. It can be defined as an organization composed of several business partners sharing costs and resources for the purpose of producing a product or service. Each partner brings a strength to the organization, such as creativity, market expertise, logistical knowledge, or low cost. The ASP model has a strong value proposition for this type of organization because it can provide the application expertise, technology, and knowledge at a lower cost. Exhibit 1 lists the major attributes of VCs and highlights how ASPs can fit each attribute and serve the organization's needs. The continued evolution of VCs presents ASPs with unique opportunities. The first is a greater need for messaging, collaboration software, and tools. Some ASPs are currently targeting the corporate e-mail application. Because partners of virtual corporations can be located anywhere, but will not relocate to join a VC, the need for interorganizational information systems will grow. Software vendors will need to address this need by developing systems that can be effectively utilized between organizations because currently most systems are primarily designed for single-firm use. Ease of integration and the use of Internet standards such as TCP/IP and XML by software vendors will allow ASPs to offer system access to many organizations in a secure and integrated environment.

Exhibit 1. Strengths of ASPs in the Virtual Corporation

Excellence — Each partner brings its core competence and an all-star winning team is created. No single company can match what the virtual corporation can achieve.
Strength of ASPs: By providing in-depth application expertise, technology experience, and the ability to provide high levels of service, the ASP organization is suited to deliver the technology excellence sought by VCs.

Utilization — Resources of the business partner are frequently underutilized or utilized in a merely satisfactory manner. In the virtual corporation, resources can be put to use more profitably, thus providing a competitive advantage.
Strength of ASPs: The economies of scale that allow ASPs to provide low-cost service require a high degree of system utilization; therefore, they are incented to partner with VCs to ensure their resources are efficiently utilized.

Opportunism — The partnership is opportunistic. A VC is organized to meet a market opportunity.
Strength of ASPs: To capitalize on opportunities in the marketplace, VCs can utilize ASPs to implement required support systems quickly.

Lack of borders — It is difficult to identify the boundaries of a virtual corporation; it redefines the traditional boundaries. For example, more cooperation among competitors, suppliers, and customers makes it difficult to determine where one company ends and another begins in the VC partnership.
Strength of ASPs: The ASP business model is characterized by many business partnerships between software vendors, systems integrators, and infrastructure providers. VC partnerships can leverage a shared data center or shared application. Because costs are determined by the number of users, technology costs can be shared among partners and not owned by one firm in the VC.

Trust — Business partners in a VC must be far more reliant on each other and require more trust than ever before. They share a sense of destiny.
Strength of ASPs: As organizations evolve into more trusting environments, this will lower some of the barriers to ASP adoption. ASPs must focus on maintaining good service levels to ensure VC customers continue to trust an ASP with valuable data and mission-critical systems.

Adaptability to change — The VC can adapt quickly to environmental changes in a given industry or market.
Strength of ASPs: ASPs in today's marketplace are constantly evolving due to the uncertainty in the industry and the pace of technology change. Successful ASP organizations will possess an innate ability to change and assist their customers in implementing technology rapidly.

Technology — Information technology makes the VC possible. A networked information system is a must.
Strength of ASPs: Because technology is a critical component and VCs do not want to build their own IT infrastructure, the ASP service delivery model or outsourcing is the only alternative. Additionally, the ASP model is a networked delivery system and therefore a perfect match for the VC.


POTENTIAL IT MANAGEMENT ISSUES

According to the Gartner Group, "The ASP model has emerged as one of the foremost global IT trends driving phenomenal growth in the delivery of applications services. Long term, this model will have a significant impact on IT service delivery and management." IT organizations will have to deal with a variety of changes in the culture of the organization, make infrastructure improvements, and manage the people, technology, and business processes.

Culture Changes

With the increased adoption of the ASP delivery model, IT organizations will need to adapt culturally to being less responsible for providing technology internally and begin embracing the concept that other organizations can provide a higher degree of value. Most IT professionals take a negative view of outsourcing because successful adoption of this principle means fewer projects to manage, fewer staff members to hire, and a perception of a diminishing role in the organization. The self-preservation instincts of most IT managers will lead them to view this trend with negativism and skepticism. Therefore, increased trust of service providers and software companies will be required to effectively manage the ASP relationships of the future. For IT professionals to survive the future changes ASPs promise, they must evolve from programming and technical managers to vendor managers. Additionally, they should think strategically and help position their organization to embrace the competitive advantages an ASP can provide with packaged software and quick implementations. IT leaders will need to gain a better understanding of the business they support rather than implementing the latest and greatest technologies. With an increased understanding of the business drivers and the need for improvements in efficiency and customer service, IT practitioners must focus increasingly on business processes and understanding how the technology provided by an ASP can increase the firm's competitive advantage. Reward and compensation programs should be modified whereby IT compensation plans are based on achieving business objectives rather than successful completion of programming efforts or systems integration projects. IT managers should also work to improve their communication skills, which are required to interact effectively within VCs and the many partnerships they represent.

Infrastructure Changes

There are five major components of the information infrastructure. These components consist of computer hardware, general-purpose software, networks and communication facilities (including the Internet and intranets), databases, and information management personnel. Adoption of the ASP delivery model will require changes in each of these areas. Typically, organizations deploy two types of computer hardware — desktop workstations and larger server devices — which run applications and support databases. The emerging ASP model will impact both types of hardware. First, desktop systems can become "thinner" because the processing and application logic is contained in the ASP's data center. The desktop will also become increasingly standardized, with organizations taking more control of software loaded on the workstation to ensure interoperability with the applications provided via the Internet. Second, there will be a decreasing need to purchase servers or mainframe computing environments because the ASP will provide these services. Also, support staff and elaborate data center facilities will not be required by ASP adopters. General-purpose software such as transaction processing systems, departmental systems, and office automation systems will reside at the ASP and be accessed via the network. End-user knowledge of the system's functionality will be required, however, and users will gain the knowledge through training provided by the ASP rather than the in-house application support staff. Programming modifications and changes will be reduced because the low-customization approach of ASPs will force organizations to change business processes and map themselves to the application. While expenditures on hardware and software dwindle, infrastructure investment will be focused on better communication networks. Next-generation networks will be required to support many types of business applications such as voice, data, and imaging, as well as the convergence of data and voice over a single network. Network infrastructures must be flexible to accommodate new standards. People required to support these advanced networks will be in high demand and difficult to retain in-house. However, ASP providers are not likely to offer internal network support directly, but may partner with third-party firms to manage the internal networks of an organization. Traditional telephone and network equipment providers do offer turnkey solutions in an outsourced delivery model today. Perhaps the most critical piece of an organization's infrastructure whose role is not fully defined in the ASP model is the role of data. The speed and direction of how this issue is addressed will be critical to the ASP industry's future growth. Typically, data architecture was a centralized function along with communications and business architecture. XML will be a factor in the adoption of ASPs due to the improvements XML promises in easing the integration of different systems.

This presents an integration issue for organizations: synthesizing two different data architectures, namely, Internet data in XML format and legacy data. Another large concern for many corporations is the security of the data. Because an organization's data is a source of strategic advantage, vital to the "core business," many firms will not want to outsource the care and management of their data to an outside firm.

Service Level Agreements

Simply stated, the purpose of a service level agreement (SLA) is to provide the user of the service with the information necessary to understand and use the contracted services. For SLAs to be effective, they must:

• Be measured and managed
• Be audited
• Be provided at an economic price
• Give maximum value to users of the services

Additionally, they should be structured to reward behavior instead of triggering penalties in the contract. By incorporating this philosophy, ASPs are able to generate additional revenue from providing superior service and are incentivized to remain customer focused. Given that SLAs must include factors such as measurability, manageability, and auditability, performance metrics must be defined or at least very well understood. Comprehensive SLAs in any outsourcing relationship should attempt to define service levels by utilizing the five metrics presented below; a brief sketch of how the first three might be computed follows the list.

1. Availability: the percentage of time the contracted services are actually accessible over a defined measurement period
2. Reliability: the frequency with which the scheduled services are withdrawn or fail over a defined measurement period
3. Serviceability: an extension of reliability that measures the duration of available time lost between the point of service failure and service reinstatement (e.g., 95 percent of network failures will be restored within 30 minutes of initial reporting)
4. Response: the delay between a demand for service and the subsequent reply; response time can be measured as turnaround time
5. User satisfaction: a measure of perceived performance relative to expectation; satisfaction is often measured using a repeatable survey process to track changes over time
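The following Python sketch is a minimal illustration of how the availability, reliability, and serviceability metrics above might be computed from a log of outages. The measurement period, record format, and 30-minute restoration target are hypothetical and would be defined in the SLA itself.

```python
from datetime import timedelta

# Hypothetical measurement period and outage records.
MEASUREMENT_PERIOD = timedelta(days=30)

# Each record: (duration of the outage, time from initial report to restoration).
outages = [
    (timedelta(minutes=42), timedelta(minutes=25)),
    (timedelta(hours=2), timedelta(minutes=50)),
]

total_downtime = sum((duration for duration, _ in outages), timedelta())

# Availability: share of the period during which the service was accessible.
availability = 1 - total_downtime / MEASUREMENT_PERIOD
# Reliability: number of failures over the measurement period.
failures = len(outages)
# Serviceability: share of failures restored within the 30-minute target.
serviceability = sum(restore <= timedelta(minutes=30) for _, restore in outages) / failures

print(f"Availability:   {availability:.3%}")
print(f"Reliability:    {failures} failures in {MEASUREMENT_PERIOD.days} days")
print(f"Serviceability: {serviceability:.0%} restored within 30 minutes")
```

Response time and user satisfaction would come from different sources (transaction monitoring and periodic surveys), which is why the SLA must specify not only targets but also how and by whom each metric is collected.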

Exhibit 2. A Comparison of Internal and External SLAs

External SLAs:
• Terminology defined
• Legalized
• Responsibilities defined
• Service definition precise
• Processes defined
• Price rather than cost

Internal SLAs:
• Terminology is "understood"
• Not legalized
• Responsibilities defined
• Service definition not precise
• Processes understood
• Cost rather than price, if at all

Exhibit 3. Proposed Service Level Agreement Characteristics (ASP "Best of" SLAs)

• Terminology is "defined" when practical, and "understood" when not
• Not legalized
• Responsibilities defined
• Service definition precise; however, also measured by metrics such as user satisfaction and service response time
• Processes understood
• Price based on service levels attained rather than flat fees with penalties for nonperformance

As seen in Exhibit 2, external and internal SLAs have different characteristics. Internal SLAs are generally more flexible than external SLAs. Given that contract flexibility is a key concern of ASP adopters, provider organizations can differentiate themselves based on their ability to keep SLAs flexible while still providing the needed level of contract formality. By combining "best practices" from external and internal SLAs, the ASP can adopt a "best of" approach, as shown in Exhibit 3. Further, to create a partnership between the ASP and the adopting organization, the ASP must focus on both the direct and the indirect aspects of SLAs. In-house IT organizations are often successful service providers due to the additional value provided by a focus on the indirect aspects. These activities build and foster a relationship with the business. This business-focused approach is critical in the IT service delivery model. Larson1 further defines the two types of SLAs as direct and indirect. Examples of factors that comprise the direct aspects of SLAs can be characterized as the IT functions that companies are seeking in their ASP relationship (see Exhibit 4). The ASP is either the primary provider of these services or the secondary provider based on the partnering relationships in place with infrastructure providers or service aggregators. Most ASPs are unique in that they must manage SLAs with both customers and suppliers.
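To make the "best of" pricing idea concrete, the following sketch expresses a simplified SLA as data and ties the monthly fee to the service level actually attained, rewarding superior service rather than penalizing shortfalls. All names, thresholds, and amounts are hypothetical and are not drawn from the sources cited in this chapter.

```python
# Minimal sketch of a "best of" SLA expressed as data, with the monthly fee
# based on the service level actually attained rather than penalty clauses.
# All names, thresholds, and amounts are hypothetical.

SLA = {
    "service": "hosted CRM",
    "responsibilities": {
        "provider": "application and infrastructure operations",
        "client": "user administration and first-line support",
    },
    "targets": {"availability": 0.995, "response_seconds": 2.0},
    # (minimum availability attained, monthly fee in USD), highest tier first
    "fee_tiers": [(0.999, 12000), (0.995, 10000), (0.0, 8000)],
}

def monthly_fee(attained_availability: float) -> int:
    """Reward-based pricing: superior service earns a higher fee."""
    for threshold, fee in SLA["fee_tiers"]:
        if attained_availability >= threshold:
            return fee
    return 0

print(monthly_fee(0.9973))  # 10000: contracted target met
print(monthly_fee(0.9995))  # 12000: superior service generates additional revenue
```

Expressing the agreement in a structured, machine-readable form also supports the measurability and auditability criteria discussed earlier, because the same definitions drive both the monitoring reports and the billing.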

Exhibit 4. Other Aspects of Service Level Agreements

Direct examples:
• Processing services
• Processing environments
• Infrastructure services
• Infrastructure support
• Other support (i.e., help desk)

Indirect examples:
• Periodic status reviews or meetings
• Attendance at meetings to provide expert advice
• Performance reporting
• Testing or disaster recovery
• Maintenance of equipment in asset management
• Consulting on strategy and standards
• Service billing

Source: Larson.1

In addition to the direct services an ASP will provide in an outsourcing agreement, it will be expected to provide other services or value to the business. The amount of expertise and indirect support should be addressed on a case-by-case basis.

THE FUTURE OF THE ASP INDUSTRY

The future of the ASP industry is heavily dependent on software vendors to:

• Provide Internet-architectured applications
• Develop formal ASP distribution channel programs
• Refrain from competing against these channels
• Implement new licensing programs for the ASP customer (a brief sketch contrasting per-user and per-minute billing follows this list)
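As a purely illustrative comparison of two licensing approaches, the sketch below contrasts a flat per-user fee with the usage-based, per-minute agreement discussed next. All rates and usage figures are hypothetical.

```python
# Illustrative comparison of two ASP billing models: a flat cost per user per
# month versus a usage-based cost per minute. All figures are hypothetical.

def per_user_cost(users: int, rate_per_user: float) -> float:
    return users * rate_per_user

def per_minute_cost(total_minutes: int, rate_per_minute: float) -> float:
    return total_minutes * rate_per_minute

users = 250
monthly_minutes = users * 8 * 60        # light usage: about 8 hours per user per month

flat = per_user_cost(users, 45.00)                 # $45 per user per month
metered = per_minute_cost(monthly_minutes, 0.02)   # $0.02 per minute

print(f"Per-user contract:   ${flat:,.2f}/month")
print(f"Per-minute contract: ${metered:,.2f}/month")
# Metered pricing favors light or seasonal users; flat pricing favors heavy users.
```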

The evolution of contracts from cost-per-user to cost-per-minute service agreements is also hypothesized; however, an ASP organization will require an exceptionally large customer base to offer this type of billing program. Additionally, software packages must be flexible enough to allow the mass customization required and provide all the functionality that many different types of organizations require. A large opportunity exists in other specialized business applications not addressed by the ERP and CRM vendors. ASPs that can partner with "best-of-breed" solutions for an industry, possess the industry experience, and have an existing distribution channel will succeed.

CONCLUSION

The ability to manage service levels is an important factor in determining successful ASPs as well as successful adopting organizations. All market segments of the ASP model are required to contract for service level agreements. For businesses to effectively utilize the low costs offered by ASP services, they must fully understand their business requirements and what they are paying the ASP to provide.

Application Service Providers Editor’s Note: This chapter is based on two published articles by the authors: “The Future of Application Service Providers,” Information Strategy: The Executive’s Journal, Summer 2001, and “ASPs versus Outsourcing: A Comparison,” Enterprise Operations Management, August–September 2001.

References and Further Reading 1. Larson, Kent D., The Role of Service Level Agreements in IT Service Delivery, Information Management and Computer Security, June 3, 1998, pp. 128–132. 2. Caldwell, Bruce, Outsourcing Deals with Competition, Information Week, (735): 140, May 24, 1999. 3. Caldwell, Bruce, Revamped Outsourcing, Information Week, (731): 36, May 24, 1999. 4. Dean, Gary, ASPs: The Net’s Next Killer App, J.C. Bradford & Company, March 2000. 5. Gerwig, Kate, Business: The 8th Layer, Apps on Tap: Outsourcing hits the Web, NSW, September 1999. 6. Hurley, Margaret and Schaumann, Folker, KPMG Survey: The Outsourcing Decision, Information Management and Computer Security, May 4, 1997, pp. 126–132. 7. Internet Research Group, Infrastructure Application Service Providers, Los Altos, CA, 2000. 8. Johnson, G. and Scholes, K., Exploring Corporate Strategy: Text and Cases, Prentice-Hall, Hemel Hempstead, U.K. 9. Leong, Norvin, Applications Service Provider: A Market Overview, Internet Research Group, Los Altos, CA, 2000. 10. Lonsdale, C. and Cox, A., Outsourcing: Risks and Rewards, Supply Management, July 3, 1997, pp. 32–34. 11. Makris, Joanna, Hosting Services: Now Accepting Applications, Data Communications, March 21, 1999. 12. Mateyaschuk, Jennifer, Leave the Apps to US, Information Week, October 11, 1999. 13. McIvor, Ronan, A Practical Framework for Understanding the Outsourcing Process, Supply Chain Management: An International Journal, 5 (1), 2000, pp. 22–36. 14. PA Consulting Group, International Strategic Sourcing Survey 1996, London. 15. Porter, M.E., Competitive Advantage: Creating and Sustaining Superior Performance, Free Press, New York, 1985. 16. Prahald, C.K. and Hamel, G., The Core Competence of the Corporation, Harvard Business Review, July–August, pp. 79–91. 17. Teridan, R., ASP Trends: The ASP Model Moves Closer to Prime Time, Gartner Group Research Note, January 11, 2000. 18. Turban, E., McLean, E., and Wetherbe, J., Information Technology for Management: Making Connections for Strategic Advantage, John Wiley & Sons, New York, 1999. 19. Williamson, O.E., Markets and Hierarchies, Free Press, New York, 1975. 20. Williamson, O.E., The Economic Institutions of Capitalism: Firms, Markets, and Relational Contracting, Free Press, New York, 1985. 21. Yoon, K. P. and Naadimuthu, G., A Make-or-Buy Decision Analysis Involving Imprecise Data, International Journal of Operations and Production Management, 14(2), 1994, pp. 62–69.



Section 2

Designing and Operating an Enterprise Infrastructure

Issues related to the design, implementation, and maintenance of the IT infrastructure are important for every modern company and particularly challenging for larger enterprises. Very rapid technical change and increasing demands for connectivity to external systems make these tasks even more difficult. The purpose of this section is to help IS managers broaden and deepen their understanding of the following core issues:

• Managing a distributed computing environment
• Developing and maintaining the networking infrastructure
• Data warehousing
• Quality assurance and control
• Security and risk management

MANAGING A DISTRIBUTED COMPUTING ENVIRONMENT

IT solutions are increasingly being built around core Internet technologies that enable the distribution of processing and data storage resources via standardized protocols. Effective management of distributed environments requires both careful technical infrastructure analysis and design and appropriate allocation and structuring of organizational resources to support the chosen model of technology distribution. This section begins with three chapters that discuss the broad organizational and technical context in which modern distributed systems are managed. Chapter 16, "The New Enabling Role of the IT Infrastructure," is based on 15 in-depth case studies on IT infrastructure development. The authors identify four key elements in the development of the IT infrastructure and three partnership processes that link the key elements together. The chapter emphasizes the importance of developing a well-defined architecture based on corporate strategy and maintaining an IT infrastructure that effectively supports organizational processes. The telecommunications and networking links that connect the geographically distant parts of an organization are critical elements of any IT infrastructure. Therefore, the tumultuous state of the telecommunications industry and the serious difficulties facing various communications equipment manufacturers and operators cause high levels of uncertainty for network users. Chapter 17, "U.S. Telecommunications Today," provides an updated analysis of the telecommunications industry. The author also shares insights that will help managers avoid the potential pitfalls caused by the high levels of uncertainty within this critically important industry. The concept of IT infrastructure has broadened significantly in recent years because of the wide variety of devices (both mobile and stationary) that continuously communicate with each other. Computing has become truly ubiquitous, and wireless connectivity between mobile devices has increased the flexibility of configurations significantly.

Chapter 18, "Information Everywhere," presents a model of the new "pervasive information environment" that helps us conceptualize, understand, and address the key challenges management faces when implementing effective utilization plans for this new environment. The chapter presents numerous examples to clarify the benefits that organizations and individuals can derive from the pervasive information environment.

DEVELOPING AND MAINTAINING THE NETWORKING INFRASTRUCTURE

The networking infrastructure of an organization is increasingly tightly integrated with the rest of the IT infrastructure. In addition, the telecommunications infrastructure for voice (and potentially video) communication is merging with the data networking infrastructure. A highly reliable networking infrastructure that provides adequate capacity without interruptions is, therefore, a vitally important element of any modern infrastructure. Chapter 19, "Designing and Provisioning an Enterprise Network," provides a comprehensive overview of the process of redesigning an organizational network infrastructure or building one from scratch. It strongly emphasizes the importance of choosing a vendor and a design that provide the best possible return on investment and of managing vendor relationships. Both internal and external users of an organization's IT infrastructure are beginning to connect to it using wireless access technologies and, many times, these users are truly mobile users who require and want to utilize connectivity regardless of their physical location. Chapter 20, "The Promise of Mobile Internet: Personalized Services," presents a framework for decision makers to identify new application opportunities offered by wireless access to the Internet. The chapter provides a comprehensive analysis of the differences between fixed-line and wireless access technologies from the perspective of their potential use in organizations. Chapter 21, "Virtual Private Networks with Quality of Service," focuses on two of today's fundamentally important networking technologies: (1) virtual private networks (VPNs), which make it possible to use public networking infrastructures (particularly the Internet) to implement secure private networks, and (2) quality of service (QoS) in the context of VPNs — that is, the prioritization of network traffic types based on their immediacy requirements. The convergence of voice, video, and data networks requires effective utilization of QoS mechanisms; although these technologies are still in early stages of development, gaining the full benefits of VPNs requires paying close attention to these issues. Data storage is one area where the integration needs between networking technologies and the general IT infrastructure have expanded.

Increasingly often, storage capacity is separated from servers and implemented using either storage area networks (SANs) or network attached storage (NAS) devices. Chapter 22, "Storage Area Networks Meet Enterprise Data Networks," provides the reader with an overview of SAN technologies and describes the factors that ensure effective integration of SANs with the rest of the organization's networking and computing infrastructure.

DATA WAREHOUSING

Data warehousing has become an essential component of an enterprise information infrastructure for most large organizations, and many small and mid-sized companies have also found data warehousing to be an effective means to provide high-quality decision support data. At the same time, many organizations are struggling to find the best approach to implement data warehousing solutions. Chapter 23, "Data Warehousing Concepts and Strategies," provides an introduction to data warehousing and covers issues related to the fundamental characteristics, design and construction, and organizational utilization. The effects that the introduction of Web technologies has had on data warehousing, as a foundation for modern decision support systems, are emphasized. The authors remind us that successful implementation entails not only technical issues, but also attention to a variety of organizational and managerial issues. In particular, both financial and personnel resources must be committed to the project, and sufficiently strong attention must be paid to the quality of data at the conceptual level. Data marts, which are scaled-down versions of data warehouses that have a narrower (often departmental) scope, can be an alternative to a full-scale data warehousing solution. Chapter 24, "Data Marts: Plan Big, Build Small," discusses how data marts can be used by organizations as an initial step. This approach requires careful planning and a clear view of the eventual goal. When implemented well, data marts can be both a cost-effective and organizationally acceptable solution to providing organizational decision makers with high-quality decision support data. An excellent introduction to the differences between data marts and data warehouses, which can be used for educating business managers, is also provided. Data mining applications utilize data stored in data warehouses and transactional databases to identify previously unknown patterns and relationships. The introduction to data mining techniques and applications provided in Chapter 25, "Data Mining: Exploring the Corporate Asset," convincingly shows why traditional, verification-based mechanisms are insufficient when analyzing very large databases. This chapter provides an excellent rationale for investing in the tools, knowledge, and skills required for successful data mining initiatives.

Chapter 26, "Data Conversion Fundamentals," is a practical introduction to the process of data conversion and the decisions a group responsible for the conversion may face when an organization moves from one data management platform to another, develops a new application that uses historical data from an existing, incompatible system, or starts to build a data warehouse and wants to preserve old data. In addition to guiding the reader through a ten-step process, the author emphasizes the importance of ensuring data quality throughout the process and the potentially serious consequences if such issues are ignored — especially when moving data from transaction processing systems to a data warehouse.

QUALITY ASSURANCE AND CONTROL

These two chapters focus on a vitally important topic: mechanisms that are needed to manage the quality of an organization's IT infrastructure and the services it provides. Chapter 27, "Service Level Management Links IT to the Business," highlights the new set of challenges created when shifting focus from managing service levels for a technical infrastructure (e.g., servers and networks) to managing service levels for applications and user experience. The author points out the need to use both technical and perceptual measures when evaluating service quality. Defining and negotiating service level agreements (SLAs) is not easy, but it is essential that service level management and SLAs are closely linked to the business objectives of the organization. Information systems audits are an important management tool for maintaining high ethical and technical standards. Chapter 28, "Information Systems Audits: What's in It for Executives?," presents an overview of the IS audit function and its role in organizations. Traditionally, non-auditors have viewed an audit as a negative event that is "done to" an organizational unit. This chapter presents a more contemporary, value-added approach in which auditing is done in cooperation with business unit personnel and in support of the business unit's quest for excellence in IS quality. At the same time, it is important to maintain the necessary independence of the audit function. The authors demonstrate how regular, successfully implemented IS audits lead to significant benefits for the entire organization.

SECURITY AND RISK MANAGEMENT

Infrastructure security is requiring more and more of managers' time and attention as the awareness of the need for proactive management of security has grown. Chapter 29, "Cost-Effective IS Security via Dynamic Prevention and Protection," emphasizes that even the best technologies are not enough to guarantee security if the organization does not have mechanisms in place to dynamically identify new security threats and prevent them from being realized. Risk analysis, security policies, and security audits are all necessary components of dynamic prevention and protection.

However, they have to be implemented in a context that enables the adaptation of policies, technologies, and structures based on the changes in real and anticipated threats. The author suggests a method to help decision makers build approaches for their organizations that take into account the specific requirements of the environments in which they operate. In addition to protecting the IT infrastructure against external and internal intruders, recent events have made managers responsible for maintaining the IT infrastructure highly aware of the need for business continuity planning and implementation, that is, putting in place all the mechanisms that are needed to ensure continuous availability of an organization's IT resources even in the event of a major catastrophe. Chapter 30, "Reengineering the Business Continuity Planning Process," addresses the need to progress from a narrow traditional disaster recovery approach to a much broader business continuity planning approach. The author points out the importance of measuring the success of continuity planning and suggests that the balanced scorecard method be used for success metrics. The chapter also discusses several special issues related to Web-based systems with 24/7 availability requirements. This section ends with two chapters that focus on specific security topics. Chapter 31, "Wireless Security: Here We Go Again," demonstrates how many of the issues related to securing various wireless network access methods are similar to issues that organizations faced when transitioning from mainframe systems to distributed architectures. The chapter provides a thorough review of wireless access technologies, the specific security issues associated with each of them, and the technologies that can be used to mitigate these risks. Finally, Chapter 32, "Understanding Intrusion Detection Systems," is an introduction to a widely utilized set of technologies for detecting attacks against network resources and, in some cases, automatically responding to those attacks. The chapter categorizes the intrusion detection technologies, discusses them in the general context of organizational security technologies, and points out some of their most significant limitations.


Chapter 16

The New Enabling Role of the IT Infrastructure Jeanne W. Ross John F. Rockart

Recently, a number of large companies have made very large investments in their information technology (IT) infrastructures. For example:

• Citicorp invested over $750 million for a new global database system.
• Dow Corning and most other Fortune 500 companies invested tens of millions of dollars or more to purchase and install enterprisewide resource planning systems.
• Johnson & Johnson broke with tradition by committing corporate funds to help its individual operating companies acquire standard desktop equipment.
• Statoil presented all 15,000 of its employees with a high-end computer for home or office use.

At firms all over the world, senior executives in a broad cross-section of industries are investing their time and money to shore up corporate infrastructures. In the past, many of these same executives had, in effect, given their IT units a generous allowance and admonished them to spend it wisely. Now, in contrast, they are engaging in intense negotiations over network capabilities, data standards, IT architectures, and IT funding limits. The difficulty of assessing the value of an IT infrastructure, coupled with technical jargon and business uncertainties, has made these conversations uncomfortable for most executives, to say the least. But the recognition that global markets are creating enormous demands for increased information sharing within and across firms has led to the realization that a powerful, flexible IT infrastructure has become a prerequisite for doing business.


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE The capabilities built into an infrastructure can either limit or enhance a firm’s ability to respond to market conditions (Davenport and Linder, 1993). To target a firm’s strategic priorities, senior executives must shepherd the development of the infrastructure (Broadbent and Weill, 1997). Sadly, most senior executives do not feel qualified to do so. As one CEO described it: “I’ve been reading on IT, but I’m terrified. It’s the one area where I don’t feel competent.” New infrastructure technologies are enabling new organizational forms and, in the process, creating a competitive environment that increasingly demands both standardization for cost-effectiveness and customization for responsiveness. Most firms’ infrastructures are not capable of addressing these requirements. Accordingly, firms are ripping out their old infrastructures in an attempt to provide features such as fast networks, easily accessible data, integrated supply chain applications, and reliable desktop support. At the firms that appear to be weathering this transition most successfully, senior management is leading the charge. Over the past years, we have done in-depth studies of the development of the IT infrastructure at 15 major firms. We have examined their changing market conditions and business imperatives, and we have observed how they have recreated their IT infrastructures to meet these demands. This chapter reports on our observations and develops a framework for thinking about IT infrastructure development. It first defines IT infrastructure and its role in organizations. It then describes how some major corporations are planning, building, and leveraging new infrastructures. Finally, it describes the roles of senior, IT, and line managers in ensuring the development of a value-adding IT infrastructure. WHAT IS AN IT INFRASTRUCTURE? Traditionally, the IT infrastructure consisted primarily of an organization’s data center, which supported mainframe transaction processing (see Exhibit 1.) Effectiveness was assessed in terms of reliability and efficiency in processing transactions and storing vast amounts of data. Running a data center was not very mysterious, and most large organizations became good at it. Consequently, although the data center was mission critical at most large organizations, it was not strategic. Some companies, such as Frito-Lay (Mead and Linder, 1987) and Otis Elevator (McFarlan and Stoddard, 1986), benefited from a particularly clear vision of the value of this infrastructure and converted transaction processing data into decision-making information. But even these exemplary infrastructures supported traditional organizational structures, consolidating data for hierarchical decision-making purposes. IT infrastructures 176


Exhibit 1. The Role of IT Infrastructure in Traditional Firms

in the data center era tended to reinforce existing organizational forms rather than enable entirely new ones. In the current distributed processing era, the IT infrastructure has become the set of IT services shared across business units (Broadbent and Weill, 1997). Typically, these services include mainframe processing, network management, messaging services, data management, and systems security. While still expected to deliver reliable, efficient transaction processing, the IT infrastructure must also deliver capabilities, such as facilitating intraorganizational communications, providing ready access to data, integrating business processes, and establishing customer linkages. Delivering capabilities through IT infrastructure is much more difficult than managing a data center. Part of the challenge is technological because many of the individual components are immature, making them both unreliable and difficult to integrate. The bigger challenge, however, is organizational, because process integration requires that individuals change how they do their jobs and, in most cases, how they think about them. CHANGING ORGANIZATIONAL FORMS AND THE ROLE OF INFRASTRUCTURE Historically, most organizations could be characterized as either centralized or decentralized in their organizational structures. While centralization and decentralization were viewed as essentially opposite organiza177


Exhibit 2. Traditional Organizational Models

tional structures, they were, in fact, different manifestations of hierarchical structures in which decisions made at the top of the organization were carried out at lower levels (see Exhibit 2.) Decentralized organizations differed from centralized in that more decision making was pushed down the hierarchy but communication patterns were still vertical and decisions involving two business units were usually made at a higher level so that business units rarely recognized any interdependencies. Centralization and decentralization posed significant trade-offs in terms of their costs and benefits. Simply stated, centralization offered economies of scale while decentralization allowed firms to be more responsive to individual customers. Thus, the degree to which any firm was centralized or decentralized depended on which of these benefits offered the most value. As global markets have forced firms to speed up decision making and to simultaneously recognize both the global scope of their customers and their unique demands, firms have found it increasingly important to garner the benefits of both centralization and decentralization simultaneously. Johnson & Johnson and Schneider National demonstrate how firms are addressing this challenge. Johnson & Johnson For almost 100 years, Johnson & Johnson (J&J), a global consumer and healthcare company, achieved success as a decentralized firm (Ross, 1995a). Both J&J management and external analysts credited the autonomy of the firm’s approximately 160 operating companies with stimulating innovation and growth. In the late 1980s, however, top management 178

The New Enabling Role of the IT Infrastructure observed that a new breed of customer was emerging, and those customers had no patience for the multiple salespersons, invoices, and shipments characteristic of doing business with multiple J&J companies. For example, executives at Wal-Mart, the most powerful of the U.S. retailers, noted that J&J companies were sending as many as 17 different account representatives in a single month. In the future, Wal-Mart mandated, J&J should send just one. In response, J&J created customer teams to service each of its largest multi-business accounts. The teams consolidated data on sales, distribution, accounts receivable, and customer service from the operating companies and presented a single face to the customer. Initially, much of the reconciliation among the businesses required manipulating spreadsheets populated with manually entered data. Ultimately, it meant that J&J would introduce complex structural changes that would link its independent operating companies through franchise management, regional organizations, and market-focused umbrella companies. Schneider National In contrast, Schneider National, following deregulation of the U.S. trucking industry in 1980, relied on a highly centralized organizational structure to become one of the country’s most successful trucking companies. Schneider leveraged its efficient mainframe environment, innovative operations models, centralized databases, and, later, satellite tracking capabilities to provide its customers with on-time service at competitive prices. By the early 1990s, however, truckload delivery had become a commodity. Intense price competition convinced Schneider management that it would be increasingly difficult to grow sales and profits. Schneider responded by moving aggressively into third-party logistics, taking on the transportation management function of large manufacturing companies (Ross, 1995b). To succeed in this market, management recognized the need to organize around customer-focused teams where operating decisions were made at the customer interface. To make this work, Schneider installed some of its systems and people at customer sites, provided customer interface teams with powerful desktop machines to localize customer support, and increasingly bought services from competitors to meet the demands of its customers. Pressures toward Federalist Forms These two firms are rather dramatic examples of a phenomenon that most large firms are encountering. New customer demands and global competition require that business firms combine the cost efficiency and tight integration afforded by centralized structures with the creativity and customer 179

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

Exhibit 3. Federalist Organizational Model

intimacy afforded by decentralized structures. Consequently, many firms are adopting “federalist” structures (Handy, 1992) in which they push out much decision making to local sites. In federalist firms, individuals at the customer interface become accountable for meeting customer needs, while the corporate unit evolves to become the “core” rather than headquarters (see Exhibit 3.) The role of the core unit in these firms is to specify and develop the core competencies that enable the firm to foster a unique identity and generate economies of scale (Hamel and Prahalad, 1990; Stalk, Evans, and Shulman, 1992). Federalist firms require much more horizontal decision making to apply shared expertise to complex problems and to permit shared resources among interdependent business units (Quinn, 1992). Rather than relying on hierarchical processes to coordinate the interdependencies of teams, these firms utilize shared goals, dual reporting relationships, incentive systems that recognize competing objectives, and common processes (Handy, 1992). Management techniques such as these require greatly increased information sharing in organizations, and it is the IT infrastructure that is expected to enable the necessary information sharing. However, an edict to increase information sharing does not, in itself, enable effective horizontal processes. To ensure that investments in information technology generate the anticipated benefits, IT infrastructure must become a top management issue. 180


Exhibit 4. The IT Infrastructure Pyramid

ELEMENTS OF INFRASTRUCTURE MANAGEMENT At the firms in our study we observed four key elements in the design and implementation of the IT infrastructure: organizational systems and processes, infrastructure services, the IT architecture, and corporate strategy. These build on one another (as shown in Exhibit 4) such that corporate strategy provides the basis for establishment of the architecture while the architecture guides decisions on the infrastructure, which provides the foundation for the organizational systems and processes. Corporate Strategy The starting point for designing and implementing an effective infrastructure is the corporate strategy. This strategy defines the firm’s key competencies and how the firm will deliver them to customers. Many large, decentralized firms such as J&J have traditionally had general corporate strategies that defined a firm-wide mission and financial performance goals, but allowed individual business units to define their own strategies for meeting customer needs. In the global economy, these firms are focusing on developing firm-wide strategies for addressing global customer demands and responding to global competition.
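The layering described for Exhibit 4 can be made concrete with a small sketch. The fragment below is purely illustrative: the four layer names are taken from this chapter, but the guiding questions, the data structure, and the traversal are assumptions added here only to show the chain of dependence the chapter describes, from corporate strategy through to organizational systems and processes.

```python
# Illustrative sketch of the dependence chain described for Exhibit 4.
# The layer names come from the chapter; the guiding questions and the
# structure itself are assumptions added purely for illustration.
LAYERS = [
    ("Corporate strategy",
     "Which core competencies and key processes will the firm deliver?"),
    ("IT architecture",
     "Which capabilities, standards, and locations follow from that strategy?"),
    ("IT infrastructure",
     "Which shared services (network, data, messaging, security) provide them?"),
    ("Organizational systems and processes",
     "How do redesigned processes exploit those shared services?"),
]

def describe_chain(layers):
    """Print each layer, its guiding question, and the layer it builds on."""
    for i, (name, question) in enumerate(layers):
        basis = layers[i - 1][0] if i else "none (the starting point)"
        print(f"{name}:\n  question: {question}\n  builds on: {basis}")

if __name__ == "__main__":
    describe_chain(LAYERS)
```

Reading the chain in order mirrors the chapter’s argument that infrastructure investments lack direction unless the strategy and architecture decisions above them in the pyramid have been worked out first.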


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE For purposes of developing the IT infrastructure, senior management must have an absolutely clear vision of how the organization will deliver on its core competencies. General statements of financial and marketing goals do not provide the necessary precision to develop a blueprint for the foundation that will enable new organizational processes. The necessary vision is a process vision in which senior management actually “roughs out” the steps involved in key decision-making and operational processes. Based on a clear vision of how it would service customers, Federal Express developed its Powership product, which allows any customer — be it an individual or a major corporation — to electronically place and track an order. Similarly, JC Penney’s internal management support system evolved from a clear vision of the process by which store managers would make decisions about inventory and sales strategies. This process included an understanding of how individual store managers could learn from one another’s experiences. Such a clear vision of how the firm will function provides clear prescriptions for the IT infrastructure. A corporate strategy that articulates key processes is absolutely essential for designing an IT infrastructure because otherwise neither IT nor business management can define priorities. The vision peels back corporate complexities so that the infrastructure is built around simple, core processes. This peeling provides a solid foundation that can adapt to the dynamics of the business environment. Some firms have attempted to compensate for a lack of clarity in corporate goals by spending more money on their infrastructures. Rather than determining what kinds of communications they most need to enable, they invest in state-of-the-art technologies that should allow them to communicate with “anyone, anytime, anywhere.” Rather than determining what data standards are most crucial for meeting immediate customer needs, they attempt to design all-encompassing data models. This approach to infrastructure building is expensive and generally not fruitful. Money is not a good substitute for direction. IT Architecture The development of an IT architecture involves converting the corporate strategy into a technology plan. It defines both the key capabilities required from the technology infrastructure and the places where the technologies, the management responsibility, and the support will be located. Drawing on the vision of the core operating and decision-making processes, the IT architecture identifies what data must be standardized corporatewide and what will be standardized at a regional level. It then specifies where data will be located and how they will be accessed. Similarly, the 182

The New Enabling Role of the IT Infrastructure architecture differentiates between processes that must be standardized across locations and processes that must be integrated. The architecture debate is a critical one for most companies because the natural tendency, where needed capabilities are unclear, is to assume that extensive technology and data standards and firm-wide implementation of common systems will prepare the firm for any eventuality. In other words, standard setting serves as a substitute for architecture. Standards and common systems support many kinds of cross-business integration and provide economies of scale by permitting central support of technologies. However, unnecessary standards and common systems limit business unit flexibility, create resistance and possibly ill will during implementation, prove difficult to sustain, and are expensive to implement. The elaboration of the architecture should help firms distinguish between capabilities that are competitive necessities and those that offer strategic advantage. It guides decisions on trade-offs between reliability and state-of-the art, between function and cost, and between buying and building. Capabilities recognized as strategic are those for which a firm can justify using state-of-the-art technologies, de-emphasizing standards in favor of function, and building rather than buying. IT Infrastructure Although firms’ architectures are orderly plans of the capabilities that their infrastructures should provide, infrastructures themselves tend to be in a constant state of upheaval. At many firms, key elements of the IT infrastructure have been in place for 20 to 30 years. Part of the infrastructure rebuilding process is recognizing that the fast pace of business change means that such enduring infrastructure components will be less common. Architectures evolve slowly in response to major changes in business needs and technological capabilities, but infrastructures are implemented in pieces, with each change introducing the opportunity for more change. Moreover, because infrastructures are the base on which many individual systems are built, changes to the infrastructure often disrupt an uneasy equilibrium. For example, as firms implement enterprisewide systems, they often temporarily replace automated processes with manual processes (Ross, 1997a). They may need to construct temporary bridges between systems as they deliver individual pieces of large, integrated systems or foundation databases. Some organizations have tried to avoid the chaos created by temporary fixes by totally replacing big pieces of infrastructure at one time. But infrastructure implementations require time for organizational learning as the firm adapts to new capabilities. “Big bang” approaches to infrastructure implementations are extremely risky. Successful companies often rely on incremental changes to move them toward 183

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE their defined architectures, minimizing the number of major changes that they must absorb. For example, Travelers Property & Casualty grasped the value of incremental implementations while developing its object-oriented infrastructure. In attempting to reuse some early objects, developers sometimes had to reengineer existing objects because new applications clarified their conceptualizations. But developers at Travelers note that had they waited to develop objects until they had perfected the model, they never would have implemented anything (Ross, 1997c). Stopping, starting, and even backing up are part of the learning process inherent in building an infrastructure. Organizational Systems and Processes Traditionally, organizations viewed their key systems and processes from a functional perspective. Managers developed efficiencies and sought continuous improvement within the sales and marketing, manufacturing, and finance functions, and slack resources filled the gaps between the functions. New technological capabilities and global markets have emphasized three very different processes: (1) supply chain integration, (2) customer and supplier linkages, and (3) leveraging of organizational learning and experience. For many manufacturing firms, supply chain integration is the initial concern. To be competitive, they must remove the excess cost and time between the placement of an order and the delivery of the product and receipt of payment. The widespread purchase of all-encompassing enterprisewide resource planning (ERP) systems is testament to both the perceived importance of supply chain integration to these firms and the conviction that their existing infrastructures are inadequate. Supply chain integration requires a tight marriage between organizational processes and information systems. ERP provides the scaffolding for global integration, but a system cannot be implemented until management can describe the process apart from the technology. At the same time, firms are recognizing the emergence of new channels for doing business with both customers and suppliers. Where technology allows faster or better customer service, firms are innovating rapidly. Thus, being competitive means gaining enough organizational experience to be able to leverage such technologies as electronic data interchange and the World Wide Web, and sometimes even installing and supporting homegrown systems at customers’ sites. Finally, many firms are looking for ways to capture and leverage organizational learning. As distributed employees attempt to customize a firm’s core competencies for individual customers, they can increase their effectiveness if they can learn from the firm’s accumulated experiences. The 184


Exhibit 5. Partnership Processes in Infrastructure Development

technologies for storing and retrieving these experiences are at hand, but the processes for making that happen are still elusive. Firms that adapt and improve on these three processes can be expected to out-perform their competitors. It is clear that to do so will require a unique combination of a visionary senior management team, a proactive IT unit, and a resourceful workforce. Together they can iteratively build, evaluate, redesign, and enhance their processes and supporting systems. IMPLEMENTING AND SUSTAINING THE INFRASTRUCTURE It is clear that the top and bottom layers of the IT pyramid are primarily the responsibility of business managers, whereas the middle layers are the responsibility of IT managers. Three partnership processes provide the glue between the layers, as shown in Exhibit 5. Communication and Education The process of moving from a strategy to an IT architecture involves mutual education of senior business and IT managers. Traditional approaches to education, such as lectures, courses, conferences, and readings, are all useful. Most important, however, is that management sched185

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE ules IT-business contact time in which the focus of the discussion is business strategy and IT capability. For example, at Schneider Logistics, senior business managers meet formally with IT managers for two hours each week. This allows IT management to identify opportunities while senior management specifies priorities and targets IT resources accordingly. Thus, the IT architecture debate is a discussion among senior managers with insights and advice from the IT unit. Senior management articulates evolving strategies for organizational processes, whereas IT clarifies capabilities of the technologies. A key role of IT becomes one of explaining the potential costs of new capabilities. Typical return-on-investment computations are often not meaningful in discussions of infrastructure development, but senior managers need to know the size of an investment and the accompanying annual support costs for new capabilities before they commit to large infrastructure investments. To avoid getting bogged down in arguments over who would pay for new capabilities, some firms have made “speed-bump” investments. Texas Instruments (TI), for example, traditionally funded infrastructure by attaching the cost of incremental infrastructure requirements to the application development project that initiated the need. But when the corporate network proved inadequate for a host of capabilities, senior management separately funded the investment (Ross, 1997b). In this way, TI avoided the inherent delays that result from investing in infrastructure only when the business units can see specific benefits that warrant their individual votes in favor of additional corporate taxes. Technology Management Moving from the architecture to the infrastructure involves making technology choices. Senior managers need not be involved in discussions of the technologies themselves as long as they understand the approximate costs and risks of introducing new capabilities. Instead, core IT works with local IT or business liaisons who can discuss the implications of technology choices. Selecting specific technologies for the corporate infrastructure involves setting standards. Local IT staff must understand those choices so that they can, on the one hand, comply with standards and, on the other hand, communicate any negative impacts of those choices. Standards will necessarily limit the range of technologies that corporate IT will support. This enables the IT unit to develop expertise in key technologies and limits the costs of supporting the IT infrastructure. However, some business units have unique needs that corporate standards do not address. Negotiation between corporate and local IT managers should allow them to recognize when deviations from standards can enhance business unit operations without compromising corporatewide goals. IT units 186

The New Enabling Role of the IT Infrastructure that clearly understand their costs have an edge in managing technologies because they are able to discuss with business managers the value of adherence to standards and the trade-offs inherent in noncompliance (Ross, Vitale, and Beath, 1997). Process Redesign Although the infrastructure can enable new organizational forms and processes, the implementation of those new processes is dependent on the joint efforts of business unit and IT management. Successful process redesign demands that IT and business unit management share responsibility and accountability for such processes as implementing common systems, establishing appropriate customer linkages, defining requirements for knowledge management, and even supporting desktop technologies. The joint accountability is critical to successful implementation because the IT unit can only provide the tools. Business unit management needs to provide the vision and leadership for implementing the redesigned processes (Davenport, 1992). Many process changes are wrenching. In one firm we studied, autonomous general managers lost responsibility for manufacturing in order to enable global rationalization of production. Initially, these managers felt they had been demoted to sales managers. A fast-food firm closed the regional offices from which the firm had audited and supported local restaurants. Regional managers reorganized into cross-functional teams and, armed with portable computers, took to the road to spend their time visiting local sites. In these and other firms, changes rarely unfolded as expected. In most cases, major process changes take longer to implement, demand more resources, and encounter more resistance than management expects. IMPLICATIONS OF INFRASTRUCTURE REBUILDING We observed significant obstacles to organizations’ attempts to build IT infrastructures to enable new federalist structures. Most of the changes these firms were implementing involved some power shifts, which led to political resistance. Even more difficult to overcome, however, was the challenge of clarifying the firm’s strategic vision and defining IT priorities. This process proved to be highly iterative. Senior management would articulate a vision and then IT management would work through the apparent technological priorities that the strategy implied. IT could then estimate time, cost, and both capabilities and limitations. This would normally lead to an awareness that the strategy was not clear enough to formulate an IT architecture. When the organization had the necessary fortitude, management would continue to iterate the strategy and architecture, but most abandoned the task midstream and the IT unit was left trying to establish priorities and implement an architecture that lacked clear man187

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE agement support. This would lead either to expensive efforts to install an infrastructure that met all possible needs or to limited investment in infrastructure that was not strategically aligned with the business (Henderson and Venkatraman, 1993). Although it is difficult to hammer out a clear architecture based on corporate strategy and then incrementally install an IT infrastructure that supports redesigned organizational processes, the benefits appear to be worth the effort. At Travelers, the early adoption of an object environment has helped it retain a high-quality IT staff and allowed it to anticipate and respond to changing market opportunities. Johnson & Johnson’s development of a corporatewide infrastructure has allowed it to address global cost pressures and to respond to the demands of global customers. Senior management sponsorship of global systems implementations at Dow Corning has enabled the firm to meet due dates for implementation and anticipate potential process redesign. As firms look for opportunities to develop competitive advantage, they find it is rarely possible to do so through technological innovations (Clark, 1989). However, the firms in this study were attempting to develop infrastructures that positioned them to implement new processes faster and more cost effectively than their competitors. This kind of capability is valuable, rare, and difficult for competitors to imitate. Thus, it offers the potential for long-term competitive advantage (Collis and Montgomery, 1995). Rebuilding an infrastructure is a slow process. Firms that wait to see how others fare in their efforts may reduce their chances for having the opportunity to do so. Notes 1. Broadbent, M., and Weill, P. 1997. Management by maxim: How business and IT managers can create IT infrastructures. Sloan Management Review 38(3): 77–92. 2. Clark, K.B. 1989. What strategy can do for technology. Harvard Business Review (November–December): 94–98. 3. Collis, D.J. and Montgomery, C.A. 1995. Competing on resources: Strategy in the 1990s. Harvard Business Review, 73 (July-August): 118–129. 4. Davenport, T.H. 1992. Process Innovation: Reengineering Work Through Information Technology. Boston: Harvard Business School Press. 5. Davenport, T.H. and Linder, J. 1993. Information management infrastructure: The new competitive weapon? Ernst & Young Center for Business Innovation Working Paper CITA33. 6. Hamel, G. and Prahalad, C.K. 1990. The Core Competence of the Corporation, Harvard Business Review, 68 (May-June). 7. Handy, C. 1992. Balancing corporate power: A new federalist paper. Harvard Business Review, 70 (November-December): 59–72. 8. Henderson, J.C. and Venkatraman, N. 1993. Strategic alignment: Leveraging information technology for transforming organizations. IBM Systems Journal, 32(1): 4–16. 9. McFarlan, F.W. and Stoddard, D.B. 1986. Otisline. Harvard Business School Case No. 9-186304.


The New Enabling Role of the IT Infrastructure 10. Mead, M. and Linder, J. 1987. Frito-Lay, Inc.: A strategic transition. Harvard Business School Case No. 9-187-065. 11. Quinn, J.B. 1992. Intelligent Enterprise: A Knowledge and Service Paradigm for Industry. New York: Free Press. 12. Ross, J.W. 1995a. Johnson & Johnson: Building an infrastructure to support global operations. CISR Working Paper No. 283. 13. Ross, J. W. 1995b. Schneider National, Inc.: Building networks to add customer value. CISR Working Paper No. 285. 14. Ross, J.W. 1997a. Dow Corning: Business processes and information technology. CISR Working Paper No. 298. 15. Ross, J.W. 1997b. Texas Instruments: Service levels agreements and cultural change. CISR Working Paper No. 299. 16. Ross, J.W. 1997c. The Travelers: Building an object environment. CISR Working Paper No. 301. 17. Ross, J.W., Vitale, M.R., and Beath, C.M. 1997. The untapped potential of IT chargeback. CISR Working Paper No. 300. 18. Stalk, G., Evans, P., and Schulman, L.E. 1992. Competing on capabilities: The new rules of corporate strategy. Harvard Business Review, 70 (March-April): 57–69.



Chapter 17

U.S. Telecommunications Today
Nicholas Economides

This chapter examines conditions in the U.S. telecommunications sector as of October 2002. It examines the impact of technological and regulatory change on market structure and business strategy. Among other topics, it discusses the emergence and decline of the telecom bubble, the impact of digitization on pricing, and the emergence of Internet telephony. The chapter briefly examines the impact of the 1996 Telecommunications Act on market structure and strategy in conjunction with the history of regulation and antitrust intervention in the telecommunications sector. After discussing the impact of wireless and cable technologies, the chapter concludes by venturing some short-term predictions. There is concern that the implementation of the 1996 Act is being derailed by the aggressive legal tactics of the entrenched monopolists (the local exchange carriers), and we point to the real danger that the intent of the U.S. Congress in passing the 1996 Act, namely to promote competition in telecommunications, will not be realized. The chapter also discusses the wave of mergers in the telecommunications and cable industries.

INTRODUCTION

Presently, the U.S. telecommunications sector is going through a revolutionary change. There are four reasons for this. The first reason is rapid technological change in key inputs of telecommunications services and in complementary goods, which has dramatically reduced the costs of traditional services and made many new services available at reasonable prices. Cost reductions have made feasible the World Wide Web (WWW) and the various multimedia applications that “live” on it. The second reason for the revolutionary change has been the sweeping digitization of telecommunications and related sectors.


The underlying telecommunications technology has become digital. Moreover, the consumer and business telecommunications interfaces have become more versatile and closer to multifunction computers than to traditional telephones. Digitization and integration of telecommunications services with computers create significant business opportunities and impose significant pressure on traditional pricing structures, especially in voice telephony. The third reason for the current upheaval in the telecommunications sector was the passage of a major new law to govern telecommunications in the United States, the Telecommunications Act of 1996 (the 1996 Act). Telecommunications has traditionally been subject to a complicated federal and state regulatory structure. The 1996 Act attempted to adapt the regulatory structure to technological reality, but various legal challenges by the incumbents have thus far delayed, if not nullified, its impact. The fourth reason is the “bubble” in telecommunications investment and in the valuation of telecommunications companies in the years 1997 to 2000 and the deflation of that bubble since late 2000.

As one looks at the telecommunications sector in the fall of 2002, one observes:

• The collapse of prices in the long-distance (LD) sector, precipitating the bankruptcy of WorldCom, the collapse of the stock prices of long-distance companies, and the voluntary divestiture of AT&T. This comes naturally, given the tremendous excess capacity in long distance from new carriers’ investment and from the huge expansion of Internet backbones, which are very close substitutes (in production) for traditional long distance.
• The fast, but not fast enough, growth of the Internet. In terms of bits transferred, the Internet has been growing at 100 percent a year rather than the 400 percent a year that was earlier predicted. As a result, huge excess capacities in Internet backbones and in long-distance transmission were created. The rush to invest in backbones created a huge expansion and then, once the anticipated demand did not materialize, the collapse of the telecom equipment sector.
• The bankruptcy of many entrants in local telecommunications, such as Covad. The reason for this was the failure of the implementation of the Telecommunications Act of 1996.
• A wave of mergers and acquisitions.

Before going into a detailed analysis, it is important to point out the major, long-run driving forces in U.S. telecommunications today. These include:

• Dramatic reductions in the costs of transmission and switching
• Digitization
• Restructuring of the regulatory environment through the implementation of the 1996 Telecommunications Act, coming 12 years after the breakup of AT&T
• Movement of value from underlying services (such as transmission and switching) to the interface and content
• Movement toward multifunction programmable devices with programmable interfaces (such as computers) and away from single-function, nonprogrammable consumer devices (such as traditional telephone appliances)
• Reallocation of the electromagnetic spectrum, allowing for new types of wireless competition
• Interconnection and interoperability of interconnected networks; standardization of communications protocols
• Network externalities and critical mass

These forces have a number of consequences, including:

• Increasing pressure for cost-based pricing of telecommunications services
• Price arbitrage between services with the same time immediacy requirement
• Increasing competition in long-distance services
• The possibility of competition in local services
• The emergence of Internet telephony as a major new telecommunications technology

This short chapter touches on technological change and its implications in the next section. It then discusses the Telecommunications Act of 1996 and its implications, followed by a review of the impact of wireless and cable technologies. The chapter concludes with some predictions and short-term forecasts for the U.S. telecommunications sector.

TECHNOLOGICAL CHANGE

The past two decades have witnessed (1) dramatic reductions in the costs of transmission through the use of new technology; (2) reductions in the costs of switching and information processing because of large reductions in the costs of integrated circuits and computers; and (3) very significant improvements in software interfaces. Cost reductions and better interfaces have made feasible many data- and transmission-intensive services. These include many applications on the World Wide Web, which were dreamed of many years ago but only recently became economically feasible. The general trend in cost reductions has allowed for entry of more competitors in many components of the telecommunications network and an intensification of competition. Mandatory interconnection of public telecommunications networks and the use of common standards for intercon-

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE nection and interoperability have created a “network of networks,” that is, a web of interconnected networks. The open architecture of the network of networks allowed for entry of new competitors in markets for particular components, as well as in markets for integrated end-to-end services. Competition intensified in many, but not all markets. Digital Convergence and “Bit Arbitrage” Entry and competition were particularly helped by (1) the open architecture of the network and (2) its increasing digitization. Currently, all voice messages are digitized close to their origination and are carried in digital form over most of the network. Thus, the data and voice networks are one, with voice treated as data with specific time requirements. This has important implications for pricing and market structure. Digital bits (zeros or ones) traveling on the information highway can be parts of voice, still pictures, video, or of a database or other computer application, and they appear identical: “a bit is a bit is a bit.” However, because some demands are for real-time services while others are not, the saying that “a bit is a bit is a bit” is only correct among services that have the same index of time immediacy. Digitization implies arbitrage on the price of bit transmission among services that have the same time immediacy requirements. For example, voice telephony and video conferencing require real-time transmission and interaction. Digitization implies that the cost of transmission of voice is hundreds of times smaller than the cost of transmitting video of the same duration. This implies that if regulation-imposed price discrimination is eliminated, arbitrage on the price of bits will occur, leading to extremely low prices for services, such as voice, that use relatively very few bits. Even if price discrimination remains imposed by regulation, arbitrage in the cost and pricing of bits will lead to pressures for a de facto elimination of discrimination. This creates significant profit opportunities for the firms that are able to identify the arbitrage opportunities and exploit them. Internet Telephony Digitization of telecommunication services imposes price arbitrage on the bits of information carried by the telecommunications network, thus leading to the elimination of price discrimination between voice and data services. This can lead to dramatic reductions in the price of voice calls, thereby precipitating significant changes in market structure. These changes were first evident in the emergence of the Internet, a ubiquitous network of applications based on the TCP/IP protocol suite. Started as a text-based network for scientific communication, the Internet grew dramatically in the late 1980s and 1990s once not-text-only applications 194

U.S. Telecommunications Today became available.1 In 2001, the Internet reached 55 percent of U.S. households, while 60 percent of U.S. households had PCs. Of the U.S. households connected to the Internet, 90 percent used a dial-up connection and 10 percent reached the Internet through a broadband service, which provides at least eight times more bandwidth/speed than a dial-up connection. Of those connecting to the Internet with broadband, 63 percent used a cable modem connection, 36 percent used DSL, and 1 percent used a wireless connection. Internet-based telecommunications are based on packet switching. There are two modes of operation: (1) a time-delay mode in which there is a guarantee that the system will do whatever it can to deliver all packets; and (2) a real-time mode, in which packets can in fact be lost without possibility of recovery. Most telecommunications services do not have a real-time requirement, so applications that “live” on the Internet can easily accommodate them. For example, there are currently a number of companies that provide facsimile services on the Internet, where all or part of the transport of the fax takes place over the Internet. Although the Internet was not intended to be used in real-time telecommunications, despite the loss of packets, presently telecommunications companies use the Internet to complete ordinary voice telephone calls. Voice telecommunications service started on the Internet as a computer-to-computer call. As long as Internet telephony was confined to calls from a PC to a PC, it failed to take advantage of the huge network externalities of the public switched network (PSTN) and was just a hobby. About seven years ago, Internet telecommunications companies started offering termination of calls on the public switched network, thus taking advantage of the immense externalities of reaching anyone on the PSTN. In 1996, firms started offering Internet calling that originated and terminated on the public switched network, that is, from and to the regular customers’ phone appliances. These two transitions became possible with the introduction of PSTN–Internet interfaces and switches by Lucent and others. In 1998, Qwest and others started using Internet Protocol (IP) switching to carry telephone calls from and to the PSTN using their own network for long-distance transport as an intranet.2 Traditional telephony keeps a channel of fixed bandwidth open for the duration of a call. Internet calls are packet based. Because transmission is based on packet transport, IP telephony can more efficiently utilize bandwidth by varying in real-time the amount of it used by a call. But, because IP telephony utilizes the real-time mode of the Internet, there is no guarantee that all the packets of a voice transmission will arrive at the destination. Internet telephony providers use sophisticated voice sampling meth195

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE ods to decompose and reconstitute voice so that packet losses do not make a significant audible difference. Because such methods are by their nature imperfect, the quality and fidelity of an Internet call depends crucially on the percentage of packets that are lost in transmission and transport. This, in turn, depends on, among other factors, (1) the allocation of Internet bandwidth (pipeline) to the phone call, and (2) the number of times the message is transmitted.3 Because of these considerations, one expects that two types of Internet telephony will survive: the low-end quality, carried over the Internet, with packets lost and low fidelity; and a service of comparable quality with traditional long distance, carried on a company’s Intranet on the long-distance part. Internet-based telecommunications services pose a serious threat to traditional national and international long-distance service providers. In the traditional U.S. regulatory structure, a call originating from a computer to an Internet service provider (ISP) (or terminating from an ISP to a computer) is not charged an “access charge” by the local exchange carrier. This can lead to substantial savings to the consumer. The FCC, in its decision of February 25, 1999, muddles the waters by finding, on one hand, that “Internet traffic is intrinsically mixed and appears to be largely interstate in nature,” while, on the other hand, it validates the reciprocal compensation of ISPs which were made under the assumption that customer calls to ISPs are treated as local calls. If Internet calls are not classified as local calls, the price that most consumers will have to pay to make Internet calls would become a significant per-minute charge. Because it is difficult to distinguish between phone calls through the Internet and other Internet traffic, such pricing will either be unfeasible or will have to apply to other Internet traffic, thereby creating a threat to the fast growth of the Internet. In fact, one of the key reasons for Europe’s lag in Internet adoption is the fact the in most countries, unlike the United States, consumers are charged per minute for local calls. The increasing use of broadband connections is changing the model toward fixed monthly fees in Europe. THE TELECOMMUNICATIONS ACT OF 1996 AND ITS IMPACT Goals of the Act The Telecommunications Act of 1996 (the 1996 Act) attempted a major restructuring of the U.S. telecommunications sector. The 1996 Act will be judged favorably to the extent that it allows and facilitates the acquisition by consumers of the benefits of technological advances. Such a function requires the promotion of competition in all markets. This does not mean immediate and complete deregulation. Consumers must be protected from monopolistic abuses in some markets as long as such abuses are feasible 196

U.S. Telecommunications Today under the current market structure. Moreover, the regulatory framework must safeguard against firms exporting their monopoly power in other markets. In passing the Telecommunications Act of 1996, the U.S. Congress took radical steps to restructure U.S. telecommunications markets. These steps may result in very significant benefits to consumers of telecommunications services, telecommunications carriers, and telecommunications equipment manufacturers. But the degree of success of the 1996 Act depends crucially on its implementation through decisions of the Federal Communication Commission (FCC) and State Public Utility Commissions as well as the outcome of the various court challenges that these decisions face. The 1996 Act envisions a network of interconnected networks composed of complementary components and generally provides both competing and complementary services. The 1996 Act uses both structural and behavioral instruments to accomplish its goals. The Act attempts to reduce regulatory barriers to entry and competition. It outlaws artificial barriers to entry in local exchange markets, in its attempt to accomplish the maximum possible competition. Moreover, it mandates interconnection of telecommunications networks, unbundling, nondiscrimination, and costbased pricing of leased parts of the network, so that competitors can enter easily and compete component by component and service by service. The 1996 Act imposes conditions to ensure that de facto monopoly power is not exported to vertically related markets. Thus, the 1996 Act requires that competition be established in local markets before the incumbent local exchange carriers are allowed in long distance. The 1996 Act preserves subsidized local service to achieve “Universal Service,” but imposes the requirement that subsidization is transparent and that subsidies are raised in a competitively neutral manner. Thus, the 1996 Act leads the way to the elimination of subsidization of Universal Service through the traditional method of high access charges. The 1996 Act crystallized changes that had become necessary because of technological progress. Rapid technological change has always been the original cause of regulatory change. The radical transformation of the regulatory environment and market conditions that is presently taking place as a result of the 1996 Act is no exception. History Telecommunications has traditionally been a regulated sector of the U.S. economy. Regulation was imposed in the early part of this century and remains today in various parts of the sector.4 The main idea behind regulation was that it was necessary because the market for telecommunications 197

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE services was a natural monopoly, and therefore a second competitor would not survive. As early as 1900, it was clear that all telecommunications markets were not natural monopolies, as evidenced by the existence of more than one competing firm in many regional markets, prior to the absorption of most of them into the Bell System. Over time, it became clear that some markets that may have been natural monopolies in the past are not natural monopolies anymore, and that it is better to allow competition in those markets while keeping the rest regulated. The market for telecommunication services and for telecommunications equipment went through various stages of competitiveness since the invention of the telephone by Alexander Graham Bell. After a period of expansion and consolidation, by the 1920s, AT&T had an overwhelming majority of telephony exchanges and submitted to state regulation. Federal regulation was instituted by the 1934 Telecommunication Act, which established the Federal Communications Commission. Regulation of the U.S. telecommunications market was marked by two important antitrust lawsuits that the U.S. Department of Justice brought against AT&T. In the first one, United States v. Western Electric, filed in 1949, the U.S. Department of Justice (DoJ) claimed that the Bell Operating Companies practiced illegal exclusion by buying only from Western Electric, a part of the Bell System. The government sought a divestiture of Western Electric, but the case was settled in 1956 with AT&T agreeing not to enter the computer market but retaining ownership of Western Electric. The second major antitrust suit, United States v. AT&T, started in 1974. The government alleged that (1) AT&T’s relationship with Western Electric was illegal and (2) that AT&T monopolized the long-distance market. The DoJ sought divestiture of both manufacturing and long distance from local service. The case was settled by the Modified of Final Judgment (MFJ). This decree broke away from AT&T seven regional operating companies (RBOCs). Each RBOC was comprised of a collection of local telephone companies that were part of the original AT&T. The RBOCs remained regulated monopolies, each with an exclusive franchise in its region. Microwave transmission was a major breakthrough in long-distance transmission that created the possibility of competition in long distance. Microwave transmission was followed by technological breakthroughs in transmission through satellite and through optical fiber. The breakup of AT&T crystallized the recognition that competition was possible in long distance, while the local market remained a natural monopoly. The biggest benefits to consumers during the past 15 years have come from the long-distance market, which during this period was 198

U.S. Telecommunications Today transformed from a monopoly to an effectively competitive market. However, often consumers do not reap the full benefits of cost reductions and competition because of an antiquated regulatory framework that, ironically, was supposed to protect consumers from monopolistic abuses but instead protects the monopolistic market structure. Competition in long distance has been a great success. The market share (in minutes of use) of AT&T fell from almost 100 percent to 53 percent by the end of 1996, and is presently significantly below 50 percent. Since the MFJ, the number of competitors in the long-distance market has increased dramatically. In the period up to 1996, there were four large facilities-based competitors: AT&T, MCI-WorldCom, Sprint, and Frontier.5 In the period after 1996, a number of new large facilities-based competitors entered, including Qwest, Level 3, and Williams. There are also a large number of “resellers” that buy wholesale service from the facilities-based longdistance carriers and sell to consumers. For example, currently, there are about 500 resellers competing in the California interexchange market, providing very strong evidence for the ease of entry into this market. At least 20 new firms have entered the California market each year since 1984. In California, the typical consumer can choose from at least 150 long-distance companies. Exhibit 1 shows the dramatic decrease in the market share of AT&T in long distance up until 1998, after which the declining trend has continued. Prices of long-distance phone calls have decreased dramatically. The average revenue per minute of AT&T’s switched services has been reduced by 62 percent between 1984 and 1996. AT&T was declared “non-dominant” in the long-distance market by the FCC in 1995.6 Most economists agree that presently the long-distance market is effectively competitive. Exhibit 2 shows the declining average revenue per minute for AT&T and the average revenue per minute net of access charges. Local telephone companies that came out of the Bell System (i.e., RBOCs) actively petitioned the U.S. Congress to be allowed to enter the long-distance market, from which they were excluded by the MFJ. The MFJ prevented RBOCs from participation in long distance because of the anticompetitive consequences that this would have for competition in long distance. The anticompetitive effects would arise because of the control by RBOCs of essential “bottleneck” inputs for long-distance services, such as terminating access of phone calls to customers who live in the local companies’ service areas. The RBOCs enjoyed monopoly franchises. A long-distance phone call is carried by the local telephone companies of the place it originates and the place it terminates, and only in its longdistance part by a long-distance company. Thus, “originating access” and “terminating access” are provided by local exchange carriers to long-dis199















Exhibit 1. AT&T’s Market Share of Interstate Minutes



















 

















Exhibit 2. Average Revenue per Minute of AT&T Switched Services
















DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE tance companies and are essential bottleneck inputs for long-distance service. Origination and termination of calls are extremely lucrative services.7 Access has an average cost (in most locations) of $0.002 per minute. Its regulated prices vary. The national average in 2001 was $0.0169 per minute. Such pricing implies a profit rate of 745 percent.8 Access charges reform is one of the key demands of the pro-competitive forces in the current deregulation process. The great success of competition in long distance allowed the U.S. Congress to appear “balanced” in the Telecommunications Act of 1996 by establishing competition in local telephony, while allowing RBOCs into long distance after they meet certain conditions. However, the transition of local markets to effective competition will not be as easy or as quick as in the long-distance markets. This is because of the nature of the product and the associated economics. Many telecommunications companies are presently trying to be in as many markets as possible so that they can bundle the various products. Companies believe that consumers are willing to pay more for bundled services for which the consumer receives a single bill. Bundling also discourages consumers from migrating to competitors, who may not offer the complete collection of services, so consumer “churn” is expected to be reduced. Entry in Local Services as Envisioned by the 1996 Act Currently, the “last mile” of the telecommunications network that is closest to the consumer (the “local loop”) remains a bottleneck controlled by a local exchange carrier (LEC). In 1996, RBOCs (i.e., Ameritech, Bell Atlantic, BellSouth, SBC, and US West) had 89 percent of the telephone access lines nationwide. Most of the remaining lines belonged to GTE and independent franchise holders. Basic local service provided by LECs is not considered particularly profitable. However, in addition to providing access to longdistance companies, LECs also provide lucrative “custom local exchange services” (CLASS), such as call waiting, conference calling, and automatic number identification. The Telecommunications Act of 1996 boldly attempted to introduce competition in this last bottleneck, and, before competition takes hold, the Act attempts to imitate competition in the local exchange. To facilitate entry into the local exchange, the 1996 Act introduces two novel ways of entry in addition to entry through the installation of its own facilities. The first way allows entry into the retailing part of the telecommunications business by requiring incumbent local exchange carriers (ILECs) to sell at wholesale prices to entrants any retail service that they offer. Such entry is essentially limited to the retailing part of the market. 202

The second and most significant novel way of entry introduced by the 1996 Act is through leasing of unbundled network elements from incumbents. In particular, the 1996 Act requires that ILECs (1) unbundle their networks and (2) offer for lease to entrants network components (unbundled network elements [UNEs]) "at cost plus reasonable profit."9 Thus, the 1996 Act envisions the telecommunications network as a decentralized network of interconnected networks.

Many firms, including the large interexchange carriers AT&T and MCI WorldCom, attempted to enter the market through "arbitration" agreements with ILECs under the supervision of State Regulatory Commissions, according to the procedure outlined by the 1996 Act. The arbitration process proved to be extremely long and difficult, with continuous legal obstacles and appeals raised by the ILECs. To this date (October 2002), over six years after the signing of the 1996 Act by President Clinton, entry in the local exchange has been small. In the latest statistics collected by the FCC,10 as of June 30, 2001, entrant competitive local exchange carriers (CLECs) provided 17.3 million (or about 9.0 percent) of the approximately 192 million nationwide local telephone lines. The majority (55 percent) of these lines were provided to business customers. Approximately one third of CLEC service provision is over CLECs' own facilities. For services provided over leased facilities, the percentage of CLEC lines provisioned as total service resale of ILEC services declined to 23 percent at the end of June 2001, while the percentage provisioned over acquired UNE loops grew to 44 percent.

Entry of RBOCs into Long-Distance Service

The 1996 Act allows for entry of RBOCs in long distance once a list of requirements has been met and the petitioner has proved that its proposal is in the public interest. These requirements can be met only when the market for local telecommunications services becomes sufficiently competitive. If the local market is not competitive when an incumbent LEC monopolist enters into long distance, the LEC can leverage its monopoly power to disadvantage its long-distance rivals by increasing their costs in various ways and by discriminating against them in its pricing. In particular, such an ILEC would control the price of a required input (switched access) to long-distance service, while it would also compete for long-distance customers. Under these circumstances, an ILEC can implement a vertical price squeeze on its long-distance competitors, whereby the price-to-cost ratio of long-distance competitors is squeezed so that they are driven out of business.11

In allowing entry of local exchange carriers into the long-distance market, the 1996 Act tries not to endanger the competition that has developed in long distance by premature entry of RBOCs into the long-distance market. However, on this issue, the 1996 Act's provisions guarding against premature entry are insufficient. Hence, to guard against anti-competitive consequences of premature entry of RBOCs in long distance, a deeper analysis of the consequences of such entry on competition and on consumer and social welfare is needed.

Currently, RBOCs have been approved in 15 states for in-region provision of long-distance services. As of October 2002, the approved, pending, rejected, and withdrawn applications are summarized in Exhibit 3.12

THE IMPACT OF WIRELESS AND OF CABLE TELEVISION

During the past 20 years there has been a tremendous (and generally unanticipated) expansion of the mobile phone market. This growth has nonetheless been limited by relatively high prices resulting from (1) the prevention of entry of more than two competitors in each metropolitan area, and (2) the standard billing arrangement that imposes a fee on the cellular customer for receiving (as well as initiating) calls. However, during the past six years, the FCC has auctioned parts of the electromagnetic spectrum that will enable the transmission of personal communication services (PCS) signals.13 The auctioned spectrum will be able to support up to five additional carriers in the major metropolitan markets.14 Although the PCS spectrum band is different from traditional cellular bands, PCS is predicted to be a low-cost, high-quality mobile alternative to traditional phone service. Other wireless services may chip away at the ILEC markets, especially in high-capacity access services.15

The increase in the number of competitors has already created very significant decreases in prices of mobile phone services. By its nature, PCS is positioned between fixed local service and traditional wireless (cellular) service, and presently there is a very significant price difference between those two services. Priced between the two, PCS first drew consumers from traditional cellular service in large cities, and has a chance to become a serious threat to fixed local service. Some PCS providers already offer data transmission services whose pricing is not too far from fixed broadband pricing.

Industry analysts have been predicting the impending entry of cable television into telephony for many years. Despite numerous trials, such entry into traditional telecommunications services has not fully materialized. There are a number of reasons for this. First, to provide telephone service, cable television providers will need to upgrade their networks from analog to digital. Second, they will need to add switching. Third, most of the cable industry has taken on a high debt load and has difficulty making the required investments in the short run.

Exhibit 3. Status of Long-Distance Applications by RBOCs in October 2002

State                               Filed by    Status     Date Filed   Date Resolved
CO, ID, IA, MT, NE, ND, UT, WA, WY  Qwest       Pending    09/30/02     Due by 12/27/02
CA                                  SBC         Pending    09/20/02     Due by 12/19/02
FL, TN                              BellSouth   Pending    09/20/02     Due by 12/19/02
VA                                  Verizon     Approved   08/01/02     10/30/02
MT, UT, WA, WY                      Qwest       Withdrawn  07/12/02     09/10/02
NH, DE                              Verizon     Approved   06/27/02     09/25/02
AL, KY, MS, NC, SC                  BellSouth   Approved   06/20/02     09/18/02
CO, ID, IA, NE, ND                  Qwest       Withdrawn  06/13/02     09/10/02
NJ                                  Verizon     Approved   03/26/02     06/24/02
ME                                  Verizon     Approved   3/21/02      6/19/02
GA, LA                              BellSouth   Approved   2/14/02      5/15/02
VT                                  Verizon     Approved   1/17/02      4/17/02
NJ                                  Verizon     Withdrawn  12/20/01     3/20/02
RI                                  Verizon     Approved   11/26/01     2/24/02
GA, LA                              BellSouth   Withdrawn  10/02/01     12/20/01
AR, MO                              SBC         Approved   08/20/01     11/16/01
PA                                  Verizon     Approved   6/21/01      9/19/01
CT                                  Verizon     Approved   4/23/01      7/20/01
MO                                  SBC         Withdrawn  4/4/01       6/7/01
MA                                  Verizon     Approved   1/16/01      4/16/01
KS, OK                              SBC         Approved   10/26/00     1/22/01
MA                                  Verizon     Withdrawn  9/22/00      12/18/00
TX                                  SBC         Approved   4/5/00       6/30/00
TX                                  SBC         Withdrawn  1/10/00      4/05/00
NY                                  Verizon     Approved   9/29/99      12/22/99
LA                                  BellSouth   Denied     7/9/98       10/13/98
LA                                  BellSouth   Denied     11/6/97      2/4/98
SC                                  BellSouth   Denied     9/30/97      12/24/97
MI                                  Ameritech   Denied     5/21/97      8/19/97
OK                                  SBC         Denied     4/11/97      6/26/97
MI                                  Ameritech   Withdrawn  1/02/97      2/11/97

When it is able to provide switching on a large scale, cable television will have a significant advantage over regular telephone lines. Cable TV lines that reach the home have a significantly higher bandwidth capacity than regular twisted-pair lines. Thus, it is possible to offer a number of "telephone lines" over the cable TV wire, as well as broadband (high-bandwidth) access to the World Wide Web, which requires high-bandwidth capacity.

A key reason for AT&T's acquisition of cable companies was the provision of telephone services through cable. Upgrades to provide telephony proved to be more expensive and much slower than expected, and it is uncertain if Comcast/AT&T will continue the upgrade of cable lines to telephony. The announcement by AT&T of the provision of telephony through cable and the entry of independent DSL providers prompted incumbent LECs to aggressively market their DSL services, because it was generally accepted that customers would not switch easily from DSL to broadband cable and vice versa.

As the threat of AT&T and independent DSL providers diminished, ILECs reduced their DSL campaigns. At the end of 2001, broadband connections were 63 percent cable, 36 percent DSL, and 1 percent wireless.

THE CURRENT WAVE OF MERGERS

Legal challenges have derailed the implementation process of the 1996 Act and have significantly increased the uncertainty in the telecommunications sector. Long-distance companies have been unable to enter the local exchange markets by leasing unbundled network elements (UNEs) because the arbitration process that started in April 1996 has resulted in long delays in the determination of final prices. Given the uncertainty of the various legal proceedings, and without final resolution on the issues of nonrecurring costs and the electronic interface for switching local service customers across carriers, entry into the local exchange through leasing of unbundled network elements has been minimal. Moreover, entry into the retailing part of the business through total service resale has also been minimal because the wholesale discounts have been small.

In the absence of entry into the local exchange market as envisioned by the 1996 Act, the major long-distance companies are buying companies that give them some access to the local market. MCI merged with WorldCom, which had just merged with Brooks Fiber and MFS, which in turn own some infrastructure in local exchange markets. MCI-WorldCom focused on the Internet and the business long-distance market.16 WorldCom proposed a merger with Sprint. The merger was stopped by both the United States Department of Justice (DoJ) and the Competition Committee of the European Union (EU). The DoJ had reservations about potential dominance of the merged company in the market for global telecommunications services. The EU had objections about potential dominance of the Internet backbone by the merged company.17 In June 2002, WorldCom filed for Chapter 11 bankruptcy protection after a series of revelations about accounting irregularities; as of October 2002, the full effects of these events on the future of WorldCom and the entire industry are still open.

AT&T unveiled an ambitious strategy for reaching consumers' homes using cable TV wires for the "last mile." With this purpose in mind, AT&T acquired TCI, which also owned a local exchange infrastructure that reaches business customers. AT&T promised to convert the TCI cable access to an interactive broadband, voice, and data telephone link to residences. AT&T also entered into an agreement with TimeWarner to use its cable connection in a way similar to that of TCI.

In April 1999, AT&T outbid Comcast and acquired MediaOne, the cable spin-off of US West. TCI cable reached 35 percent of U.S. households. Together with TimeWarner and MediaOne, AT&T could reach a bit more than 50 percent of U.S. households. Without access to UNEs to reach all residential customers, AT&T had to find another way to reach the remaining U.S. households. The provision of telephony, Internet access, broadband, data, and two-way video services exclusively over cable lines in the "last mile" requires significant technical advances, significant conversion of the present cable networks, and an investment of at least $5 billion (and some say $30 billion) just for the conversion of the cable network to two-way switched services. Moreover, there is some inherent uncertainty in such a conversion, which has not been successful in the past. Thus, it was an expensive and uncertain proposition for AT&T but, at the same time, it was one of the few remaining options for entry into the local exchange.

Facing tremendous pressure from financial markets, AT&T decided on a voluntary breakup into a wireless unit, a cable TV unit, and a long-distance and local service company that retained the name AT&T and the symbol "T" on the NYSE. Financial markets tended to underestimate the value of AT&T by looking at it only as a long-distance company. The cable part of AT&T was merged with Comcast, and the full breakup should be almost finished by the end of 2002. In a complicated financial transaction, AOL/TimeWarner plans to divest the portion of it that AT&T controls, transferring it to AT&T/Comcast.

Meanwhile, Pacific Bell was acquired by SBC, and NYNEX by Bell Atlantic, despite antitrust objections, in an attempt by the RBOCs to maximize their foothold, looking forward to the time when they would be allowed to provide long-distance service. SBC bought Southern New England Telephone (SNET), one of the few companies that, as an independent (not part of AT&T at divestiture), was not bound by MFJ restrictions and had already entered into long distance. Bell Atlantic merged with GTE to form Verizon, and SBC bought Ameritech. US West merged with Qwest, a new long-distance service provider. Thus, the eight large local exchange carriers of 1984 (seven RBOCs and GTE) have been reduced to only four: Verizon, BellSouth, SBC, and Qwest. The smallest of these, BellSouth, already feels the pressure, and it has been widely reported to be in merger/acquisition talks with a number of parties. Recently, BellSouth announced a pact with Qwest to sell Qwest's long-distance service once BellSouth is allowed to sell long-distance service.

A crucial cross-media merger occurred with the acquisition of TimeWarner by AOL at the height of AOL's stock price. The merger was achieved with the requirement that AOL/TimeWarner would allow independent ISPs to access its cable monopoly for broadband services. Synergies and new joint products failed to materialize at AOL/TimeWarner, and there is wide speculation that AOL will be divested.

The present crisis in telecommunications arose out of an incorrect prediction of the speed of expansion of the Internet. It was widely believed that the Internet would grow at 400 percent per year in terms of bits transferred. In retrospect, it is clear that for the years 2000 and 2001, only 100 percent growth was realized. Of course, it was difficult to pin down the growth rate in the early stages of an exponential network expansion. The Internet was growing at 400 percent per year when the predictions were made. However, the rate of growth slowed with respect to the number of new hosts connected. And because no new "killer application" that required a lot of bandwidth was unveiled, the rate of growth in bits transferred also slowed. This is despite the very fast growth of peer-to-peer (P2P) file transfers, mainly songs in MP3 format, popularized by Napster and still going strong even after Napster has been practically closed down.

Based on the optimistic prediction of Internet growth, there was tremendous investment in Internet transport and routing capacity. Moreover, because capital markets were very liberal in providing funds, a number of companies invested in and deployed more telecommunications equipment than would have been prudent given their then-current market share. This was done for strategic reasons, essentially in an attempt to gain market share in the process of the rapid expansion of the Internet. Once the growth prediction was revised downward, the immediate effect was a significant reduction in orders and investment in optical fiber, switching, and router equipment. Service companies wait for higher utilization rates of their existing capacity as the Internet expands. There is presently a temporary overcapacity of the Internet in the United States. And, as mentioned, because it is easy to run the Internet backbone as a long-distance network, the huge overcapacity of the Internet backbone, combined with new investment and overcapacity of traditional long-distance networks, has led to very significant pressure on, and reductions in, long-distance prices.

THE COMING WORLD

The intent of the 1996 Act was to promote competition and the public interest. It will be a significant failure of the U.S. political, legal, and regulatory systems if the interests of entrenched monopolists, rather than the public interest as expressed by the U.S. Congress, dictate the future of the U.S. telecommunications sector. The market structure in the telecommunications sector two years ahead will depend crucially on the resolution of the LECs' legal challenges to the 1996 Telecommunications Act and its final implementation.18 We have already seen a series of mergers leading to the re-monopolization of local telecommunications.

As the combinations of former RBOCs are approved state by state for long distance, we see a reconstitution of the old AT&T monopoly (without the present AT&T). We also see significant integration in the cable industry as AT&T found it extremely difficult to enter the local exchange market.

Whatever the outcomes of the legal battles, the existence of arbitrage and the intensification of competition necessitate cost-based pricing and will create tremendous pressure on traditional regulated prices that are not cost-based. Prices that are not based on cost will prove unsustainable. This includes the access charges that LECs charge to IXCs (long-distance providers), which have to become cost-based if the vision of a competitive network of interconnected networks is to be realized.

Computers are likely to play a larger role as telephone appliances and in running intermediate-sized networks that will compete with LECs and intensify the arbitrage among IXCs. Telephony based on the Internet Protocol (IP) will become the norm. Firms that have significant market share in computer interfaces, such as Microsoft, are likely to play a significant role in telephony.19 Hardware manufacturers — especially firms such as Cisco, Intel, and 3Com — that make switches and local networks will play a much more central role in telephony. Internet telephony (voice, data, and broadband) is expected to grow quickly.

Finally, the author expects that, slowly but steadily, telecommunications will drift away from the technical standards of Signaling System Seven (SS7) established by AT&T before its breakup. As different methods of transmission and switching gain a foothold, and as new interfaces become available, wars over technical standards are very likely.20 This will further transform telecommunications from the traditional quiet landscape of regulated utilities to the mad-dash world of software and computer manufacturing. This change will create significant business opportunities for entrants and impose significant challenges on traditional telecommunications carriers.

Notes

1. Critical points in this development were the emergence of GOPHER and MOSAIC in the early 1990s.
2. In November 1997, Deutsche Telekom (DT) introduced Internet long-distance service within Germany. To compensate for the lower quality of voice transmission, DT offers Internet long distance at one fifth its regular long-distance rates. Internet telephony is the most important challenge to the telecommunications sector.
3. A large enough bandwidth increases the probability that fewer packets will be lost. And, if each packet is sent a number of times, it is much more likely that each packet will arrive at the destination at least once, and the quality of the phone call will not deteriorate. Thus, the provider can adjust the quality level of an Internet call by guaranteeing a lot of bandwidth for the transmission, and by sending the packets more than once. This implies that the quality of an Internet call is variable and can be adjusted upward using the variables mentioned. Thus, high-quality voice telephony is immediately feasible in intranets because intranets can guarantee a sustained, sufficient bandwidth. There is no impediment to the quality level of a phone call that is picked up from the PSTN at the local switch, carried over long distance on leased lines, and redelivered to the PSTN at the destination local switch, using the recently introduced Lucent switches. For Internet calls that originate or terminate in computers, the method of resending packets can be used on the Internet to increase the quality of the phone call, as long as there is sufficient bandwidth between the computer and the local telephone company switch. The fidelity of calls can also be enhanced by manipulation of the sound frequencies. This can be done, for example, through the elemedia series of products by Lucent.
4. The telecommunications sector is regulated both by the federal government, through the Federal Communications Commission (FCC), and by all states, typically through a Public Utilities Commission (PUC) or Public Service Commission. Usually, a PUC also regulates electricity companies.
5. Frontier was formerly Rochester Telephone.
6. See Federal Communications Commission (1995).
7. These fees are the single largest cost item in the ledgers of AT&T.
8. Termination pricing varies. In 2001, the FCC reported access charges ranging from $0.011 to $0.0369 per minute.
9. The FCC and State Regulatory Commissions have interpreted these words to mean Total Element Long Run Incremental Cost (TELRIC), which is the forward-looking, long-run (minimized) economic cost of an unbundled element and includes the competitive return on capital.
10. See "Trends in Telephone Service," Federal Communications Commission, May 2002, Tables 9.1–9.6.
11. Avoiding a vertical price squeeze of long-distance competitors, such as MCI, was a key rationale for the 1984 breakup of AT&T into the long-distance division that kept the AT&T name and the seven RBOCs that remained monopolists in local service. See Economides (1998, 1999).
12. Source: FCC.
13. Despite this and other auctions of spectrum, the FCC does not have a coherent policy of efficient allocation of the electromagnetic spectrum. For example, the FCC recently gave (for free) huge chunks of electromagnetic spectrum to existing TV stations so that they can provide high-definition television (HDTV). Some of the recipients have publicly stated that they intend to use the spectrum to broadcast regular TV channels and information services rather than HDTV.
14. We do not expect to see five entrants in all markets because laxity in the financial requirements of bidders has resulted in the default of some of the high bidders in the PCS auctions, prompting a significant dispute regarding their financial and other obligations. A striking example is the collapse and bankruptcy of all three main bidders in the C-band auction. In this auction, set aside for small companies, the government required that companies not have high revenues and assets, and allowed them to pay only 10 percent of the winning bid price immediately and then pay in 5 percent installments over time. All large winning bidders were organized with the single purpose of winning the licenses and hardly had enough money to pay the required 10 percent. They all expected to receive the remaining money from IPOs. Given the fact that they were bidding with other people's money, spectrum bids skyrocketed in the C-band auction. Even worse, prices per megahertz were very much lower at the D-band auction that occurred before legal hurdles were cleared and before C-band winners could attempt their IPOs. As a result, no large C-band winner made a successful IPO and they all declared bankruptcy. The FCC took back the licenses but would not reimburse the 10 percent deposits. Thus, a long series of legal battles ensued, with the end result that most of the C-band spectrum is still unused, leaving fewer competitors in most markets.
15. The so-called "wireless loop" proposes to bypass the ILEC's cabling with much less outlay for equipment. Trials are underway to test certain portions of the radio spectrum that were originally set aside for other applications: MMDS for "wireless cable" and LMDS as "cellular television."
16. The MCI-WorldCom merger was challenged by the European Union Competition Committee, the Department of Justice, and GTE on the grounds that the merged company would have a large market share of the Internet "backbone" and could sequentially target its backbone rivals, degrade interconnection with them, and kill them. Despite (1) the lack of an economically meaningful definition of the Internet "backbone," (2) the fact that MCI was unlikely to have such an incentive because any degradation would also hurt its customers, and (3) the fact that it seemed unlikely that such degradation was feasible, the Competition Commission of the European Union ordered MCI to divest all its Internet business, including its retail business, where it was never alleged that the merging companies had any monopoly power. MCI's Internet business was sold to Cable & Wireless, the MCI-WorldCom merger was finalized, and WorldCom has used its UUNET subsidiary to spearhead its way in the Internet.
17. The merged company proposed to divest Sprint's backbone. Thus, the objections of the EU were based on WorldCom's market share of about 35 percent in the Internet backbone market. The EU used a very peculiar theory that predicted that "tipping" and dominance to monopoly would occur starting from this market share because WorldCom would introduce incompatibilities into Internet transmission and drive all competitors out of the market. Time proved that none of these concerns were credible.
18. In one of the major challenges, GTE and a number of RBOCs appealed (among other rules) the FCC (1996) rules on pricing guidelines to the 8th Circuit. The plaintiffs won the appeal; the FCC appealed to the Supreme Court, which ruled on January 25, 1999. The plaintiffs claimed (among other things) that (1) the FCC's rules on the definition of unbundled network elements were flawed; (2) the FCC "default prices" for leasing of UNEs were so low that they amounted to confiscation of ILEC property; and (3) the FCC's "pick-and-choose" rule (allowing a carrier to demand access to any individual interconnection, service, or network element arrangement on the same terms and conditions the LEC has given anyone else in an approved local competition entry agreement, without having to accept the agreement's other provisions) would deter "voluntarily negotiated agreements." The Supreme Court ruled in favor of the FCC on all these points, thereby eliminating a major challenge to the implementation of the Act.
19. Microsoft owns a share of WebTV, has made investments in Qwest and AT&T, and has broadband agreements with a number of domestic and foreign local exchange carriers, but it does not seem to plan to control a telecommunications company.
20. A significant failure of the FCC has been its absence in defining technical standards and promoting compatibility. Even when the FCC had a unique opportunity to define such standards in PCS telephony (because it could define the terms while it auctioned spectrum), it allowed a number of incompatible standards to coexist for PCS service. This leads directly to a weakening of competition and to higher prices, because wireless PCS consumers have to buy a new appliance to migrate across providers.

References

1. Crandall, Robert W., After the Breakup: U.S. Telecommunications in a More Competitive Era, Brookings Institution, Washington, D.C., 1991.
2. Economides, Nicholas, "The Economics of Networks," International Journal of Industrial Organization, 14(2), 675–699, 1996.
3. Economides, Nicholas, "The Incentive for Non-Price Discrimination by an Input Monopolist," International Journal of Industrial Organization, 16, 271–284, March 1998.
4. Economides, Nicholas, "The Telecommunications Act of 1996 and Its Impact," Japan and the World Economy, 11(4), 455–483, 1999.
5. Economides, Nicholas, Giuseppe Lopomo, and Glenn Woroch, "Regulatory Pricing Policies to Neutralize Network Dominance," Industrial and Corporate Change, 5(4), 1013–1028, 1996.
6. Federal Communications Commission, "In the Matter of Motion of AT&T Corp. to be Reclassified as a Non-Dominant Carrier," CC Docket No. 95-427, Order, Adopted October 12, 1995.
7. Federal Communications Commission, "First Report and Order," CC Docket No. 96-98, CC Docket No. 95-185, Adopted August 8, 1996.
8. Federal Communications Commission, "Trends in Telephone Service," May 2002.
9. Gregg, Billy Jack, "A Survey of Unbundled Network Element Prices in the United States," Ohio State University, 2001.
10. Hubbard, R.G. and Lehr, W.H., Improving Local Exchange Competition: Regulatory Crossroads, mimeo, February 1998.
11. Mitchell, Bridger and Vogelsang, Ingo, Telecommunications Pricing: Theory and Practice, Cambridge University Press, 1991.
12. Noll, Roger G. and Owen, Bruce, "The Anti-Competitive Uses of Regulation: United States v. AT&T," in John E. Kwoka and Lawrence J. White, Eds., The Antitrust Revolution, Harper Collins, New York, 1989, 290–337.
13. Technology Futures, "Residential Broadband Forecasts," 2002.

Chapter 18

Information Everywhere
Peter Tarasewich and Merrill Warkentin

The growing capabilities of wireless technologies have created the potential for a "totally connected" society, one in which every individual and every device is (or can be) linked to all others without boundaries of place or time. The potential benefits of such an environment to individuals, workgroups, and organizations are enormous. But the implementation of such an environment will be difficult, and will require not just the ubiquity of computer technology, but also the transparent availability of data and the seamless integration of the networks that tie together data and devices. This chapter presents a model for a pervasive information environment that is independent of technological change, and presents reasons why such a vision is necessary for long-term organizational success.

Welcome to the unwired world. Communication is no longer restricted by wires or physical boundaries. People can communicate with each other any time of the day or night. Users can access their data and systems from anywhere in the world. Devices can communicate with other devices or systems without the need for human intervention. At least in theory.

The technology that currently exists, although still limited in certain regards (such as bandwidth and battery life), enables the creation of the devices and networks necessary for these wireless communications. But data is also part of the communication process for much of what we do, and is essential for communication between devices. Unfortunately, no matter how much effort is invested in creating a seamless technology network, the efforts will be in vain unless an equally seamless data structure is implemented along with it. Organizations must ensure that their information systems are structured to allow manipulation of data on or between the myriad of once and future technologies. Only then will there exist a truly pervasive environment that can serve the dynamic information requirements of the individual, an organization, or society as a whole.


The idea of pervasive computing is not new. Discussions related to pervasive computing have been oriented toward technological factors. Proponents predict that networked computers (devices ranging from handhelds to information appliances to embedded sensors) will be diffused throughout everything we do and all systems with which we interact. Computing devices will be used to access information on anything and everything. Much attention is being paid to mobile devices and applications, but not to the information systems to which they belong. The ultimate goal should not be to make computing pervasive, but to make information available whenever and wherever it is needed, and to allow for complete flexibility.

In his article entitled "Pervasive Information Systems," Birnbaum1 presented a vision for pervasive computing and technology, but did not address pervasive information systems in their fullest sense. There is often a strong technological orientation in the way information systems are defined. But while an information system is implemented using technology, it should not be bound or driven by it. Furthermore, in addition to hardware and software considerations, an information system comprises people, procedures, and data functioning within an environment, all of which dictate important considerations for design and use.

Ubiquitous computing also plays a key role in the vision of a successful pervasive information environment. With ubiquitous computing, computers will be everywhere; they will be so prevalent that they will blend into the background. Technology will be embedded in many everyday devices, such as automobiles, home appliances, and even building materials.2 Sensors will be able to constantly transmit data from anywhere, while global positioning systems and proximity detection technologies will enable the tracking of devices as they move. In some instances, these embedded sensors will automatically respond to changes in their environment, a concept known as proactive computing.3

However, certain researchers have recognized the overemphasis on the latest gadgetry and gizmos.4 The information appliances and other applications that we see appearing as part of pervasive computing may be nothing more than solutions in search of problems. There is a call for emphasizing data management in pervasive computing, and for ensuring that information is available in the spatial or temporal context that will be most useful. Devices are simply used to accept or display data — the infrastructure that ties everything together is the most important concept.

Mark Bregman, general manager of pervasive computing at IBM, presented an insightful strategic viewpoint of pervasive information systems during a recent conference keynote address.5 He noted that wireless technologies must be seen as an extension of E-business, but that successful companies need to implement them seamlessly, through a smarter infrastructure.

Bregman emphasized that once this has occurred, "people will move quickly past the 'I have to get a device to access the information' mode of thought to the 'I have to access the information' mode of thought."

The Oxygen Project6 advocates an information marketplace model, an environment of freely exchanged information and information services. In addition to this is the idea of "doing more by doing less," which is based on three concepts: (1) bringing technologies into people's lives (rather than the opposite), (2) using technologies to increase productivity and usability, and (3) ensuring that everyone benefits from these gains. Oxygen calls for general-purpose communication devices that can take the place of items such as televisions, pagers, radios, and telephones. These devices are software-configurable, and can change communication modes on demand (e.g., from a cell phone to an FM radio). Oxygen also calls for more powerful devices to be integrated into the environment (e.g., buildings, vehicles). These devices can control other kinds of devices and appliances, such as sensors, controllers, and fax machines. Oxygen links these two kinds of communication devices through a special network to allow secure collaboration and worldwide connectivity.

Other research also recognizes the limitations and conflicting viewpoints of the current mobile computing environment. One vision for pervasive computing is based on three principles7:

1. A device is a portal into an application/data space and not a user-managed repository of custom software.
2. Applications should be viewed as a set of user-specified tasks, not as programs that run on certain devices.
3. The computing environment in general should be perceived as an extension of the user's surroundings, not as a virtual environment for storing and running software.

A similar vision, part of the Portolano Project, calls for a shift away from technology-driven, general-purpose devices toward ubiquitous devices that exist to meet specific user needs.8 It calls for data-centric networks that run flexible, horizontally based services that can interface with different devices.

WHY DO WE NEED A PERVASIVE ENVIRONMENT?

A truly pervasive information environment stands to greatly benefit individuals, workgroups, organizations, and society as a whole. The wireless applications that exist today — such as messaging, finding a restaurant, getting a stock quote, or checking the status of a plane flight — are all device and provider dependent. They are implemented on systems that are separate from other Internet or Web applications. Data is replicated from Web servers and stored separately for use with wireless applications.

Ultimately, a single source of data is needed to allow for any type of situation that could arise. Data must be logically separated from tasks, applications, and technologies. What follows is a series of scenarios meant to illustrate different types of situations that require flexible access to data resources to be effective.

Workgroup Meeting

A competitor's introduction of a new product hits a company by surprise, and the company hastily calls a meeting to brainstorm a retaliatory product. Representatives from several of the company's key suppliers are able to join a cross-functional team at company headquarters. Several other suppliers are participating virtually in the meeting from offices, hotels, and other locations. The company's marketing executive is currently on a cruise ship in the Mediterranean, but under the circumstances agrees to spend some of her vacation time participating in the meeting from her cabin. Devices used by the participants during the meeting range from PCs to laptops to handhelds, all attached to the Internet, some by wires and some not.

Manufacturing calls attention to specifications, electronic drawings, and materials analyses taken from a reverse-engineering session conducted on the competitor's product. Finance runs an estimate of the materials and labor costs of the product. A supplier says that, based on an analysis he just performed, the performance of such a product could be increased tenfold with very little additional cost. Another supplier adds that some of the other materials could also be substituted with lower-cost alternatives. A new product specification is worked out and approved by the group. Based on current inventories of all the suppliers, production capacity of the company, and estimates of initial demand, a product introduction date is set. The executive notifies her global marketing staff to prepare for the new product launch.

Emergency

Sensors embedded in the paint of a house detect a rapid increase in temperature and notify Emergency Services that a fire has developed. Firefighting and medical crews are immediately dispatched to the scene. As the vehicles travel, the drivers are shown when and where to turn, by voice and by a visual guidance system. Other people in the fire truck receive information on wireless display tablets concerning the construction of the house. A map of the neighborhood and a blueprint of the burning house are also shown. The fire captain begins to develop a strategy on how to attack the fire, still minutes before arriving on the scene. Based on information received about the occupants of the house, he plans a search and rescue. The strategy details appear on the tablets of the other crew members.

Upon arriving on the scene, a final assessment of the situation is conducted, and the fire is attacked. Ninety seconds later, someone is brought out of the burning house, unconscious but still breathing. Monitoring devices are placed on the patient. The emergency crew, along with a nearby hospital, begins to diagnose the patient's condition. The identification of the person, who was already thought to be living in the house, has been confirmed through a fingerprint scan. Past medical history on the patient, fed through a decision-support system, aids the process of determining a treatment. Doctors at the hospital confirm the course of action and request that the patient be transported to the hospital.

Business Traveler

A businesswoman is traveling from New York City to Moscow to close a business deal with a large multinational organization. On the way to the airport, her handheld device gives her updated gate information and departure times for her flight. A few minutes after her plane reaches cruising altitude, the phone at her seat rings. It is one of the marketing directors at her organization's Boston office. He asks her to review some updates that he has made to the proposal that she will be presenting tomorrow afternoon. The proposal appears on the screen embedded in the seat in front of her. She pulls a keyboard out of an armrest and makes modifications to two paragraphs. The director sees the changes as she makes them, and says he agrees with them. He also mentions that he received a call from Moscow five minutes ago, and that he agreed to push the meeting time back an hour. She checks the calendar of her handheld device and notices that it has already been updated to reflect the new time. Her device is also displaying a map showing an alternative route to the meeting, which is being recommended because of problems detected by the traffic sensor network in the city streets.

Logistics Management

The flexible manufacturing system in a Seattle plant automatically orders components for next month's production from the Asian supplier. The containers report their position to the shipper and to the plant at regular intervals as the containerized cargo is loaded onto ships by cranes at the cargo terminal, as they cross the ocean, and as they are offloaded onto flatbed rail cars or trucks. Because tens of thousands of containers on thousands of ships report their location to the global logistics network linking all supply-chain partners, the entire system is continually rationalized to reduce waste and delays. New time slots are automatically calculated for scheduling ground transportation and for activities pursuant to anticipated delivery times.

Overall Pervasive Environment

The underlying activities and problems addressed in each of these scenarios are not uncommon. But what is unique is the use of a pervasive information environment that allows for efficient real-time solutions to each problem. The technology in each situation is independent of the task and can utilize any required data sets. This requires a model that divorces data management from the applications that use data and the devices that interface with the applications. The following section describes such a model.

MODELING THE PERVASIVE INFORMATION ENVIRONMENT

Organizations must maintain a focus on IT as an enabler for information systems. Technology will provide the ability to communicate over the Internet anytime from anywhere. The form of input and output is determined (and limited) by the I/O device used (e.g., cell phone, PDA, embedded sensor, robot, laptop, personal computer). Wireless and mobile devices may always have limitations in terms of screen size, interactivity, and communication speed relative to physically connected devices.9 Well-designed processes are a necessity and will ensure that information systems function well, no matter what technology is used to access them or what the user needs from them. System flexibility is required to support users in this new, fast-paced, dynamic global environment. The user should never have to concentrate on the task of using technology or applications, but only on the task at hand. Access to data and systems should be straightforward and intuitive, without regard to configuring devices, reformatting output, selecting protocols, or switching to alternate data sets.

The governing principle for establishing a pervasive information environment is "access to one set of information anytime from anywhere through any device." To accomplish this goal, information systems should be structured according to the four-layer model presented in Exhibit 1. This model is an extension of similar multi-layer models that have been used for database and network systems.

The user interacts only with the highest layer: the presentation layer. In this layer, the devices utilized to access, send, and receive information must be selected and implemented. Bandwidth limitations will affect the use of most of these devices, especially wireless ones, for the foreseeable future. The application logic layer comprises the applications that process and manipulate data and information for use on devices. Application logic can reside on a device itself or elsewhere within the information system. Applications may also provide context for converting raw data into organizational knowledge. Data access concerns the actual retrieval of stored data or information, and the execution of basic processing, if required. Database queries fall within this layer. At the data access level, the use of various wireless and other protocols facilitates the smooth transfer of data from the disparate sources of data.


Presentation
Application Logic
Data Access
Data

Exhibit 1. Pervasive Information Systems Architecture

The lowest layer is the data storage layer, which forms the foundation for all information networks. Its focus is how and where to store data in an unambiguous, secure form. Data integrity is critical, so the underlying data structures (whether object oriented, relational, or otherwise) must be carefully conceived and implemented. One crucial issue is data format; incompatible data formats can prevent flexibility in the applications that access the data. We must maintain an environment in which data from heterogeneous and distributed sources, including embedded technologies, can be readily combined and delivered. This may require the use of middleware to achieve compatibility between the data sources.

In addition to issues affecting individual layers, there are some that concern the interaction between layers. Seamless transfer between presentation environments will be necessary when moving from location to location and from device to device. Security issues will affect all four layers. Organizations must also ensure that their network communications are not intercepted in this "anytime, anywhere" environment, and that data privacy requirements are met. These challenges will continue to shape the process of designing and implementing truly useful pervasive systems in the future.

CREATING THE PERVASIVE INFORMATION ENVIRONMENT

To accomplish the goal of a flexible, pervasive information environment, one set of data must be maintained. Yet this data must be available to any application and any device, now or in the future. This information vision will become an imperative for all organizations as they struggle to compete in a rapidly changing information environment. There may be many different ways to implement the model described in this chapter, and these paths may be difficult to execute. The remainder of this chapter, while not proposing or advocating a specific implementation design, describes research that might support a solution. Also presented are general concerns that must be addressed to achieve this environment.
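Before turning to that research, the separation implied by Exhibit 1 can be illustrated with a short sketch. The Python fragment below is only an illustration under assumed names (Reading, ReadingRepository, temperature_alert, and render are hypothetical and are not drawn from any product or project discussed in this chapter), but it shows how raw data, data access, application logic, and presentation can be kept in separate layers so that the same stored data can be delivered to very different devices.

# Illustrative sketch only: hypothetical names, not a prescribed implementation.
from dataclasses import dataclass


# Data layer: raw, device- and application-independent records.
@dataclass
class Reading:
    sensor_id: str
    value: float
    unit: str


# Data access layer: the only code that knows where and how data is stored.
class ReadingRepository:
    def __init__(self):
        self._store = {}

    def save(self, reading):
        self._store[reading.sensor_id] = reading

    def get(self, sensor_id):
        return self._store[sensor_id]


# Application logic layer: turns stored data into information.
def temperature_alert(repo, sensor_id, limit):
    reading = repo.get(sensor_id)
    status = "ALERT" if reading.value > limit else "normal"
    return f"{sensor_id}: {reading.value} {reading.unit} ({status})"


# Presentation layer: device-specific rendering of the same information.
def render(message, device):
    if device == "pager":
        return message[:40]                  # tiny screen: truncate
    if device == "browser":
        return "<p>" + message + "</p>"      # rich client: add markup
    return message                           # default: plain text


if __name__ == "__main__":
    repo = ReadingRepository()
    repo.save(Reading("paint-sensor-7", 71.5, "C"))
    info = temperature_alert(repo, "paint-sensor-7", limit=60.0)
    for device in ("pager", "browser", "phone"):
        print(render(info, device))

The point of the sketch is the direction of the dependencies: the devices know nothing about storage, and storage knows nothing about devices, which is precisely the flexibility the four-layer model is intended to preserve.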

The Oxygen Project described above may offer the best implementation design to allow for the vision of a pervasive information environment articulated in this chapter. Although it appears to allow a great amount of flexibility, it is still very device dependent. Access and collaboration revolve around specific devices, although they seem to be multi-purpose and can communicate with other devices. Furthermore, Oxygen does not address the concerns of the data infrastructure necessary to support such a pervasive information environment.

Evolving technologies may provide the foundation for implementing the individual layers of the pervasive information model. The presentation layer may benefit from several of these technologies.7 One is the concept of a distinct user-interface management system, which clearly separates the user interface from application logic. Web technologies such as Java applets, which are device-independent programs, may also be useful. The relatively newer service technologies (also known as E-services), which allow applications to dynamically request needed software services from outside sources, are also promising.

One solution for the application layer and its interface with the presentation layer has already been proposed, using an application model based on a vision for pervasive computing described previously in this chapter.7 The model is currently being implemented as part of continuing research, based on the assumptions of a services-based architecture wherein users will interact with applications through devices that are most readily available at the time. The Portolano Project also supports this application model, with research focused on practical applications such as a pervasive calendar system and infrastructure issues such as service discovery.

The most difficult layers to address are the data and data access layers. For the sake of application independence and data integrity, the data representation format must be independent of the presentation format. Yet raw data is rarely useful without the lens of organizational and environmental context. Although raw data itself is not useful for most purposes, and must be converted into information to provide meaning and value to decision makers and processes, the data representation format is critical. Logical data storage designs must be based on sound principles of data integrity. And the metadata or context must also be stored and communicated to the applications or devices that use it. Further complications arise if the context changes, or if the data can be used in multiple contexts. Data and metadata models must be carefully designed to ensure that all systems and individuals using the data will be presented with meaningful and accurate data within the context of its use. Raw data must always be available, and metadata must be maintained independently to ensure that they can be altered as the environment and context change. Thus, metadata really forms a sublayer on top of the lowest layer, converting the data into information as it is acquired by higher layers.
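One way to picture such a metadata sublayer is sketched below. The fragment is hypothetical (RawStore, MetadataStore, and ContextualReader are invented names, and the chapter does not prescribe any particular design); it simply shows raw values and their context being stored independently and combined only when a higher layer reads them.

# Illustrative sketch only: a metadata sublayer that attaches context to raw
# data as it is read by higher layers. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Information:
    value: float
    unit: str
    description: str


class RawStore:
    """Lowest layer: raw values only, with no interpretation attached."""
    def __init__(self):
        self._values = {"sensor-42": 23.7}

    def read(self, key):
        return self._values[key]


class MetadataStore:
    """Context kept separately, so it can change without touching raw data."""
    def __init__(self):
        self._meta = {
            "sensor-42": {"unit": "C", "description": "Server room temperature"}
        }

    def read(self, key):
        return self._meta[key]


class ContextualReader:
    """The sublayer: combines raw data and metadata into information."""
    def __init__(self, raw, meta):
        self._raw = raw
        self._meta = meta

    def read(self, key):
        context = self._meta.read(key)
        return Information(self._raw.read(key), context["unit"], context["description"])


if __name__ == "__main__":
    reader = ContextualReader(RawStore(), MetadataStore())
    info = reader.read("sensor-42")
    print(f"{info.description}: {info.value} {info.unit}")

Because the raw values and the metadata live in separate stores, either can be revised (for example, when the context of use changes) without disturbing the other, which is the independence the preceding paragraph calls for.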

In terms of inter-layer issues, there are several technologies that might play a part in creating a secure, seamless environment. Agents could be used to follow a user from place to place and from task to task. Research conducted with software agents shows that they can be used to facilitate the movement of users from one device to another.10 Data must also be protected as it travels to where it is needed. Risks of unauthorized data interception can be reduced through frequency hopping and encryption, and biometric technologies can be used to verify the identity of people who are accessing data through different devices.

CONCLUSION

One corporate mantra from the early 1990s was "the network IS the computer." The concept of pervasive computing depends on pervasive access to all data. If we migrate toward a pervasive computing environment, we also need ubiquitous, secure access to all data from any device, wired or wireless. This ubiquitous network means that we will not only have access to our data and information from everywhere, but that it will reside on the network, where it will be secure and accessible from multiple locations and with multiple devices. This valuable corporate asset should be accessible from anywhere, and transparent data availability needs to be maintained. However, data should not be stored everywhere and anywhere. By following the multilayer approach described above, organizations can ensure that data will be available to users and automated systems whenever and wherever they are, in a secure manner, with embedded contextual metadata.

The days of individuals "going to the data" (walking to computers tethered to the network) instead of having the data come to the individual are numbered. The world is quickly becoming a place of full-time and ubiquitous connectivity. Technology is marching forward, but these advances often precede the development of corporate and public policies and procedures for integrating and managing them. Decisions concerning the system architectures necessary for maximum leverage must be carefully evaluated and executed. It is also necessary to evaluate the extent of the benefits that this connectivity will have for individuals and organizations. With the current technology-driven model, the benefits of wireless technology will be limited. A more comprehensive perspective must drive the process, one that enables seamless integration of networks. The pervasive information model presented in this chapter can foster a flexible environment that can meet all users' long-term dynamic needs and ensure that the real potential of the technologies can be achieved.

References

1. Birnbaum, J., "Pervasive Information Systems," Communications of the ACM, 40(2), 40–41, February 1997.
2. Estrin, D., Govindan, R., and Heidemann, J., "Embedding the Internet," Communications of the ACM, 43(5), 38–41, May 2000.
3. Tennenhouse, D., "Proactive Computing," Communications of the ACM, 43(5), 43–50, May 2000.
4. Huang, A.C. et al., "Pervasive Computing: What Is It Good For?," Proceedings of the ACM International Workshop on Data Engineering for Wireless and Mobile Access, 1999, 84–91.
5. O'Hanton, C., "IBM: Wireless E-Commerce Next Revolution," Computer Reseller News, June 13, 2000, http://crn.com/dailies/digest/dailyarchives.asp?ArticleID=17468.
6. Dertouzos, M., "The Oxygen Project: The Future of Computing," Scientific American, 281(2), 52–55, August 1999.
7. Banavar, G. et al., "Challenges: An Application Model for Pervasive Computing," Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking, 2000, 266–274.
8. Esler, M. et al., "Next Century Challenges: Data-Centric Networking for Invisible Computing. The Portolano Project at the University of Washington," Proceedings of the Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking, 1999, 256–262.
9. Tarasewich, P. and Warkentin, M., "Issues in Wireless E-Commerce," ACM SIGecom Exchanges, 1(1), 19–23, Summer 2000.
10. Kotz, D. et al., "AGENT TCL: Targeting the Needs of Mobile Computers," IEEE Internet Computing, 1(4), 58–67, July 1997.


Chapter 19

Designing and Provisioning an Enterprise Network
Haywood M. Gelman

The purpose of this chapter is to provide an overview of managing the task of designing and provisioning an enterprise network infrastructure. This chapter examines all the major aspects of architecting a network, including analyzing customer needs, determining a budget, managing vendor relationships, writing and awarding an RFP, and finally implementing the newly designed network. The chapter does not analyze this process at a highly technical level, but rather from the perspective of a manager who has been tasked with leading a network design project for an enterprise. At the end of this chapter, there is a reading list of selected titles readers can use for more detailed study.

Although it would be tempting to claim that network design is a simple process, just the opposite is often the case. It requires attention to detail, adherence to well-established design principles, creative management of human resources, and strict financial management.

KNOW WHAT YOU HAVE NOW

If you already have an existing network, performing an equipment inventory is a critical first step. If you are designing a completely new installation, you can skip to the next section, "Planning Is Half the Battle." Determining the inventory of existing networking resources is a painstaking but vitally important process that includes analyzing everything from copper and fiber-optic cabling plants, to the network interface cards installed in desktop computer systems, to the routers, hubs, and switches that connect your computing resources and peripherals. This stage is essential to success because later steps depend on its being completed very accurately. Essential tasks during this stage include the following:


• Hire a networking cabling contractor that can work on fiber optics as well as copper to assist in assessing the current state of your cable plant. Have the contractor perform the following:
— Check all your data jacks with a continuity test. This test ensures that faulty cables do not cause data transmission errors.
— Check all data jacks to make sure they are all Category 5 or better. This will ensure that whatever networking equipment you put in place, your cable plant will operate with it seamlessly.
— Make sure that none of the jacks have been installed using what is called "split-pair wiring." This is when a Category 5 data cable, which has four pairs of wires, has two of its pairs connected to one computer and the remaining pairs connected to a separate computer hanging off the same cable. This cost-saving cabling technique should be avoided at all costs because of the potential interference problems it can create.
— Test, count, and label all copper and fiber-optic cabling. The contractor should produce a report that details, by data closet, what both cable plants look like, with a diagram of all of the cabling. (We discuss this report at a more detailed level later in this chapter.)
• Hire a networking consultant to do a baseline bandwidth utilization analysis and a network infrastructure analysis. Work that you should consider having the consultant perform for you includes the following:
— Perform a baseline bandwidth utilization analysis to determine the utilization of your current network. Also require the networking consultant to return after your network installation is complete to perform a post-installation analysis. This will allow you to compare bandwidth utilization after your installation to what it was before. Require the networking consultant to produce spreadsheets and charts detailing the analysis, as well as a statement on how the data was gathered and later analyzed. Having the original data will allow you to generate your own charts and what-if scenarios at a later date, as well as have a historical record of the work that was performed. In addition, be sure the networking consultant uses the same data-gathering techniques both pre- and post-install to allow for a true like-for-like comparison. (A minimal sketch of the underlying utilization arithmetic appears at the end of this section.)
— Routers, switches, and hubs, including the make, model, serial number, memory, software and hardware revisions, and available interfaces for each device type. It will be important to know the status of your routers, hubs, and switches later on as you move into the design phase. Gathering this information now will allow you to make the decision later as to which network equipment you will keep and which equipment you will replace.

— Desktop computers and servers, including the make, model, serial number, operating system revision, applications, memory and hard disk capacity, available hard disk space, system name, IP address, address assignment method (static or dynamic), and speed of available network cards. Gathering this information now will help you figure out later whether you need to replace network cards, upgrade operating systems, install larger hard drives, or install more memory when you get to that stage of the design process.
— Network printers, including make, model, serial number, printer name, IP address, address assignment method (static or dynamic), and network interface card speed and type. This information will be important to note later on when it comes time to decide whether or not the new applications you will place on your network can still make use of existing printers.
— Remote access services: have the consultant gather information on the access method (modems or virtual private networking). If you are currently using modems, it also makes sense at this time to have the consultant work with your telephony manager to perform a cost/benefit analysis on the potential cost savings of a VPN solution.

Part of your budget will be determined by the results you get from your network infrastructure analysis. For example, you might find that the copper wiring in your walls is all Category 5 cabling, capable of supporting gigabit speeds over copper, but that your patch panels are only Category 3 (phone spec) wiring. Your cabling contractor, who can replace your patch panels at a much lower cost than replacing the entire infrastructure, can easily remedy this problem. Also, your later decision on how the migration will take place will be aided by knowing how much available fiber-optic cabling you have in your building or campus. Because the latest networking technologies use fiber-optic cabling to transport data, whether you migrate to your new network over a short period of time (often called a "flash-cut") or with a phased approach will be determined by the status of available fiber-optic pairs in your cable plant. If you have enough fiber-optic cabling, you will be able to build your new network in parallel with your old one, interconnect the old network with the new network, and then migrate over time. If you do not have sufficient fiber, or the budget to add more fiber, you will likely need to use a flash-cut migration strategy.

Finally, knowing what networking equipment and desktop connectivity you have now will allow you to make some critical path decisions early on as to which direction you will take for your desktop connectivity requirements. If your equipment is no longer on maintenance, you are running an antiquated network technology (such as Token Ring), or your network equipment has been end-of-lifed by the manufacturer, you will need to give serious thought to replacing it at this time so you can ensure supportability for your network over the long run.
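The consultant's baseline bandwidth study ultimately reduces to simple arithmetic on interface traffic counters sampled at regular intervals. The sketch below is a minimal, illustrative version of that calculation only; it assumes octet counters have already been collected (for example, by periodic SNMP polling), and the sample figures, link speed, and polling interval are hypothetical.

    # Minimal sketch: average link utilization from two successive octet-counter
    # samples, the core arithmetic behind a baseline bandwidth study.
    # Assumes byte counters were collected at a fixed polling interval (e.g., via
    # SNMP); the numbers below are invented for illustration, and real counters
    # would also need wrap-around handling.

    def utilization_percent(octets_start, octets_end, interval_seconds, link_bps):
        """Average utilization of a link over one polling interval, in percent."""
        bits_transferred = (octets_end - octets_start) * 8
        return 100.0 * bits_transferred / (interval_seconds * link_bps)

    # Hypothetical samples from a 100 Mbps uplink polled every 300 seconds.
    samples = [(1_200_000_000, 1_450_000_000), (1_450_000_000, 1_900_000_000)]
    for start, end in samples:
        print(f"{utilization_percent(start, end, 300, 100_000_000):.1f}% average utilization")

Comparing the same calculation run on pre- and post-installation data, gathered the same way, is what makes the before-and-after comparison meaningful.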

PLANNING IS HALF THE BATTLE

The best-built network starts with a strong foundation rooted in requirements that reflect the needs of the users it will serve. The first step in finding out what the network will be used for is to ask the right people the right questions, and to ask many of them. Some essential questions to ask include:

• What applications will be utilized on the network? File and print services? Web services? Client/server? Terminal applications?
• What types of traffic flows can be expected? Are traffic flows going to be bursty in nature (as with file and print services), or are they going to be streaming in nature (as with voice and video)? Is there going to be a combination of the two (as is the case with most networks these days)? Will you plan for the future need for voice and video, or will you be implementing it right away?
• How many users does the network need to support? How many IP addresses are necessary now, and how large will the network likely be in the next five to seven years? (A simple address-sizing sketch appears at the end of this section.)
• Is IP telephony a requirement for this network? If so, what features do we need when the system goes live? Do we need just basic services, such as call answer, call park, call transfer, and voice mail, or do we also need meet-me call conferencing and unified messaging?
• Are any new applications planned for the near or not-too-distant future? Are these applications streaming in nature (such as videoconferencing), or are they more transactional or terminal-based (like Citrix)? Will the new applications be used in conjunction with a Web-browser front end, or will the application use its own client? It should be noted that most major enterprise resource planning (ERP) and database applications (e.g., Oracle and SAP) can use both.
• Will any of the applications on the new network benefit from quality of service (QoS) congestion management techniques? All networks have congestion at one point or another, but the true equalizer lies in how well your potential vendors handle congestion on the network. Technologies such as streaming video (in the form of videoconferencing) and IP telephony require you to deploy some form of QoS to prioritize the delivery of this traffic over less-critical data.

Asking the right questions (and carefully documenting the responses for later use) can save a lot of aggravation and money. This is especially true if it means that you do not have to go back to the design drawing board after vendors have already been selected and equipment purchased.
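One of the questions above, how many IP addresses are needed now and after several years of growth, reduces to straightforward arithmetic once you settle on a few planning assumptions. The sketch below is illustrative only; the devices-per-user ratio, growth rate, and planning horizon are placeholders to be replaced with your own figures from the needs analysis.

    # Illustrative sketch: size an IPv4 address block for current users plus growth.
    # Every input value here is an assumption for the example, not a recommendation.
    import math

    def required_prefix(users, devices_per_user=2.0, annual_growth=0.10, years=7):
        """Smallest prefix length whose host capacity covers the projected count."""
        projected_hosts = users * devices_per_user * (1 + annual_growth) ** years
        # Add 2 to leave room for the subnet's network and broadcast addresses.
        host_bits = math.ceil(math.log2(projected_hosts + 2))
        return 32 - host_bits, math.ceil(projected_hosts)

    prefix, hosts = required_prefix(users=800)
    print(f"~{hosts} projected hosts -> plan for a /{prefix} (or larger) block")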

KNOW YOUR BUDGET

Once you have done your network infrastructure analysis and your needs analysis, it is time to take a hard look at the numbers. Be realistic about how much you can afford to accomplish at a time. Knowing what you can afford to spend at this stage of the game will help save you from serious irritation and lost credibility with vendors later on. If you have a limited budget, your users will be better served if you break up your network build-out over two or three budget cycles rather than try to build a network all at once that does not meet the objectives uncovered during the needs analysis. Although it might seem like we are putting the cart before the horse, this is how the capital expenditures game is often played: before you can spend any money, you have to know how much money you can spend. The following are some suggestions to keep you from making any serious mistakes:

• Start looking at the potential networking vendor products that will be considered for your network implementation. Select vendors whose products have the features you are looking for, but try not to get hung up on price at this point. What you need are product literature and list prices from the vendors who will be likely providers of bids for your project. You should be able to get product literature and list prices directly from the manufacturers' Web sites, without committing yourself to a reseller at this early stage. In your analysis of vendor products, you will want to consider networking products that meet the features, port density, and performance requirements uncovered during your needs analysis.
• Learn how each vendor implements its equipment in a typical network scenario. You do not have to be a networking expert to do this effectively. At its simplest level, network design is an interconnection game: the design starts at the point where you plug in the equipment. All vendors should be able to provide you with recommended network designs, and you will quickly notice some very obvious similarities. The specifics of this part of the design are outside the scope of this chapter, but you should spend some time learning the basics of how each vendor implements a typical network. If you are inexperienced in network design, this will be a great opportunity for you to get involved at a very basic level. This is important because, when resellers come in to pitch a network design, you should at least be familiar with the basic network design models for each of the manufacturers they represent.
• Decide whether to buy chassis-based switches or fixed-configuration LAN switches. Although this may seem like a design decision as opposed to a budget decision, the cost differential between chassis-based switches and fixed-configuration switches can be as high as 30 percent.

This is often enough of a cost difference to force this decision to be made during the budgeting process. This decision will be made based on your available budget, your port-count requirements, your performance requirements, and your plan to manage the network. Some things to evaluate when making this decision include:
— Fixed-configuration switches have a fixed number of ports, whereas chassis-based switches have slots where line cards are inserted. If your budget allows, having chassis-based switches in a large network will mean fewer switches to manage. To add more ports to a chassis-based switch, simply insert additional line cards in the chassis. To add more ports to a network that uses fixed-configuration switches, you will need to add additional switches. However, in a small environment, a chassis-based switch would be inappropriate unless your performance requirements and budget dictate it.
— In general, chassis-based switches have better performance, features, and functionality, while fixed-configuration switches typically have a subset of the features and functionality of their chassis-based counterparts, yet are more cost-effective. Many companies opt to get the best of both worlds: put chassis-based switches in the core of the network where high performance is dictated, and put fixed-configuration switches in the data closet where cost-effectiveness is warranted.

VENDOR MANAGEMENT: GENERAL STRATEGY

Perhaps the most difficult, yet essential, part of this process is vendor management. There are vendors of all shapes and sizes with which you will have to deal. This includes everyone from networking manufacturers (some of whom sell direct to end customers and some who do not), to resellers (also known as "value-added resellers"; the value-add comes from the installation services they will sell you with the equipment you choose), to network engineering and security companies that only sell services, not equipment. To keep with the focus of this chapter, we will discuss networking manufacturers, that is, makers of routers, switches, firewalls, and VPN (virtual private network) devices.

First, you will need to decide which vendors will be considered for the RFP (Request for Proposal) that needs to be written. Specifics of the RFP process are detailed later in this chapter. To determine which vendors will be considered, you need to do a great deal of research. Your past experiences can be an excellent guide: if you have worked with a particular vendor's products in the past, using what you know about that vendor will give you a better foundation from which to work with new vendors. Some things you will want to consider when evaluating networking vendors include:

• Innovativeness in product design and functionality. There are many networking vendors in the market, any number of whom might make the grade as a potential vendor for your network build-out. However, you should more heavily consider those that are helping to push the technology envelope to make the products you are buying now work even better in the years to come by adding new functionality to protect your investment. Some ways to determine whether or not your vendors are technology innovators include:
— How much money does a manufacturer spend on research and development compared to revenue? This is an important statistic to look at because it tells you how much money the company is making, and how much of that money is going right back into improving your products.
— Does the manufacturer hold prominent seats on IEEE standards committees? This is important because it will give you a strong indication of the relative importance that each manufacturer places on the development of emerging standards. Certainly, being a technology innovator is important, but the development of new technology standards makes all vendors better. This is in stark contrast to the vendor that develops innovative technology that cannot be ported to other vendors, thus locking you into that vendor's proprietary technology.
— Has the manufacturer won industry recognition for product innovation in the form of awards? Numerous industry magazines have annual product awards in various categories, ranging from LAN switching, to quality-of-service congestion management, to routing, to security. Product awards will give you a good view of how the industry rates the level of innovation of the vendors on your list relative to one another.
• Customer support. This should be one of the most important considerations when deciding which vendors to consider for your network implementation. This is important because having the most innovative products at the best price means nothing if you cannot fix the equipment when it breaks. Once the equipment is implemented, you will be responsible for managing the network on a day-to-day basis, and you will need to call the manufacturer when it comes time to get help. Some ways to determine how good a manufacturer's customer support is include:
— Industry recognition. It is important to know what the rest of the industry thinks about the customer support for the vendors on your list. Generally, this recognition comes in the form of industry awards from customer-support trade magazines.

— After-hours support. You need to be very careful when analyzing vendors, and especially cognizant of the after-hours support provided by your vendors. This is important because network problems are not, unfortunately, limited to business hours. Without 24/7 coverage, you may have to wait for an engineer to be paged who will call you back, or worse yet, be directed to a Web site for support. Each vendor should have a 24-hour, 7-day-a-week coverage model. This is especially important if your company has international locations and operates in multiple time zones.
— First-call closure rates. This is a measure of how often a problem is solved on the first call to the customer support center. This will give you a feel for the general skill level of the first-level support team in its ability to effectively close calls quickly.
— Vendor's Web site content. It is important to know how much and what kind of information is available through the vendor's Web site. Make sure that common items such as manuals, configuration guides, software, forms for obtaining software licenses, and the ability to both open and track trouble tickets through the Web site are all easily accessible. Make sure that the vendor's Web site is easy to navigate, because you will probably be using it extensively.
— One phone number to call for support. If you plan to purchase more than just LAN switches (including routers, firewalls, and VPN devices), it is important to know whether or not you will need to call the same phone number to receive support on all devices or if you will need to call a different number for each product. Oftentimes, large manufacturers will grow their product portfolio by purchasing companies instead of developing the products themselves. When this happens, sometimes the product's original support team is not integrated into the overall support system for all of that vendor's products, and support thus becomes inconsistent. Be wary of vendors that give you different contact numbers to receive support on different products from the same vendor.
— Local engineering support. It is important to find out about the local presence of the sales and engineering support staff in your area. It is often necessary for customer support to coordinate a visit to your facility with an engineer or salesperson from the local office where the network was sold. Vendors with limited penetration in certain areas of the country may have large areas covered by only a few account teams. This could mean trouble if you need to have an engineer come onsite to help you solve a problem, especially if that engineer lives two states away and cannot come to see you for several days due to time conflicts with other commitments. A vendor that has a large local presence will likely be able to send an engineer right away because he or she lives and works nearby.

• Pay particular attention to the availability of industry training and technical certifications for the products you have chosen. Several manufacturers these days offer training and technical certifications that you can take to help you learn how to better support your own network. The goal of all manufacturers should be to help their customers learn as much as possible about the equipment they are selling them so customers will be able to fix many problems on their own. Be cautious of any vendor whose products do not have third-party training available from certified training partners. Technical certifications are also a plus because they offer your staff industry recognition for their efforts in learning the equipment, and give them transferable skills they can take with them as they make their way through the networking industry.
• Be cognizant of the financial standing of the vendors you have chosen. Try not to pay too much attention to marketing reports. Ask your vendors for financial statements and hold them accountable for the financial concessions they agree to provide as part of the agreement you sign with them.
• Ask the vendors for large-scale references on networks based on the products that will be placed in your network. Make certain that the references they give you are relevant. The references should be from networks that use the same products that are in your proposed network, and should be of comparable size to your network. In addition, you should ask for references of customers who will allow an on-site visit to inspect the installation, or at least will agree to a phone interview on the proposed products. Either of these will provide you with the opportunity to have an open discussion with someone else who has built a network based on the same products you plan to install.

Next, you need to decide whether you want to use the best-of-breed approach (where you pick a different vendor for each product that you need) or a single-vendor approach (where you use one vendor that has all the products you need). This is an important consideration, especially if you are looking to implement more than just routers and switches. Several companies build routers and switches, but only a few also build other devices such as firewalls and VPN servers. Although many companies these days use a single-vendor approach, there is a formula to help you determine what is best for you. The variables needed for this calculation include:

• The cost of managing multiple vendor platforms (Do you have enough staff?)
• The cost of maintenance agreements from multiple vendors (Do you have enough money?)

• The time required to properly develop the skillsets needed for administering multiple vendors' platforms (Do you have enough time?)

If your calculations for a best-of-breed approach yield values that are too high for your environment, you should consider the alternative: use a single-vendor approach. A single-vendor approach will, in many cases, save you staff, money, and time. A single-vendor environment will allow you to:

• Save staff by requiring fewer people to manage a smaller collection of equipment
• Save money by having one maintenance agreement to manage all of the products in your network infrastructure
• Save time by only having to train your staff on one operating system to manage the equipment

(A worked comparison of the two approaches, using placeholder figures, appears just before the discussion of product interoperability.)

VENDOR MANAGEMENT: COMPETITION

With all the vendors to choose from, you will most certainly be able to narrow the list to a selected few once you take into account your port requirements, performance and application needs, and your budget. When managing a list of vendors in a competitive environment, you will want to consider some of the following:

• When a network vendor makes a claim about either its equipment or that of a competitor, put that vendor to the task of proving it. Do not be concerned about insulting your potential vendors; they have come to expect their customers to question them.
• Be wary of vendors that make inflammatory comments about their competition without providing unbiased, substantive proof. The difficulty most vendors have is not in providing proof of their claims — it is providing proof that has not been compromised in some way.
• Look very carefully at the testing procedures that were employed, and be prepared to ask some tough questions. These questions should include:
— Are there any procedures that were employed during the testing process that would unfairly bias one vendor over another?
— Were features disabled that might create a false-negative in the test results?
— Was production, publicly available software used during the testing, or was one-off test-built code used?
— Did one vendor pay a third party to produce the test, unfairly biasing the results in favor of that vendor?

These are some of the questions you should be prepared to ask if you want to keep your vendors honest.
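Returning to the best-of-breed versus single-vendor decision, the comparison amounts to totaling the three variables listed earlier (staffing, maintenance, and skill development) for each approach. The sketch below simply organizes that arithmetic; every dollar figure in it is a hypothetical placeholder, not an estimate or a benchmark.

    # Hedged sketch of the best-of-breed vs. single-vendor comparison.
    # The three cost variables mirror the ones listed in the chapter: platform
    # management (staff), maintenance agreements (money), and skill development
    # (time, expressed here as a training cost). All numbers are invented.

    def total_cost(staff_cost, maintenance_cost, training_cost):
        return staff_cost + maintenance_cost + training_cost

    best_of_breed = total_cost(staff_cost=3 * 90_000,    # more platforms, more administrators
                               maintenance_cost=120_000,  # several separate contracts
                               training_cost=45_000)      # multiple vendor curricula
    single_vendor = total_cost(staff_cost=2 * 90_000,
                               maintenance_cost=95_000,
                               training_cost=15_000)

    print(f"Best-of-breed estimate: ${best_of_breed:,}")
    print(f"Single-vendor estimate: ${single_vendor:,}")
    print("Single vendor favored" if single_vendor < best_of_breed else "Best-of-breed favored")

Whichever way the totals come out, the value of writing the comparison down is that it forces the staffing, maintenance, and training assumptions into the open where they can be challenged.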

VENDOR MANAGEMENT: PRODUCT INTEROPERABILITY

Sometimes it is not possible to replace a network with another vendor's equipment overnight, but it is necessary for equipment from one vendor to coexist on a network with equipment from another vendor until such time as it is possible to have a single-vendor network. Circumstances that would require you to have a heterogeneous network include a limited budget, servers or printers whose addresses cannot easily be changed, or simply the sheer size of the network that prohibits an overnight replacement. Whatever the reason, your new equipment needs to have proven interoperability with the equipment you already have. As part of your vendor selection process, you need to ensure interoperability. This includes:

• Require each vendor to produce a statement of adherence to standards-based frame tagging. Frame tagging (more commonly known as 802.1Q) allows two vendors' products to seamlessly coexist on a switched internetwork. Frames from one vendor's product are tagged in a standard format so that equipment from one vendor can properly pass data generated by another vendor's equipment.
• Require a statement of adherence to the IEEE 802.1D Spanning-Tree standard. Spanning tree is a link-layer (Layer 2) protocol designed to ensure a loop-free topology. Spanning tree uses BPDUs (Bridge Protocol Data Units) to sense when a loop has occurred, and automatically blocks links between switches that would create problems.
• Require proof of interoperability. This will consist of interoperability testing that should be conducted at your facility using network equipment that exists in your network and the equipment from the vendors you have chosen. The selected vendors should provide their equipment for the test to be conducted, as well as an engineer to help with the testing. You should also be able to provide the selected vendors with a piece of equipment from your network, so the interoperability testing will be accurate.

If any of the vendors you have selected at this point do not meet even one of the above requirements, they should be excluded from any further consideration.

SECURITY CONSIDERATIONS

Security is a critical component of any network infrastructure. Particular care should be taken in today's world to make sure that all of the most important security threats are addressed. It is no longer sufficient to simply lock the front door of your network to protect against outside intrusions: it is necessary to know who is knocking on your door, and whether or not the knocks came from the inside of your network or the outside.

Some things to consider when addressing network security issues with potential vendors include:

• Make sure each proposed product has the ability to implement some level of security at every point of ingress or egress on your network. This includes anything from access control lists on routers and switches to protect you from ingress traffic, to security measures that restrict port access on switches down to the MAC (Media Access Control) address of the network card. You might not implement every available security feature, but a lack of available features should never be the reason why.
• Most security breaches in a networked environment are initiated from inside that network. Sometimes, this means that a disgruntled employee could be trying to gain retribution for perceived mistreatment on the part of the corporation. More often than not, it actually means that one of your computer systems has been compromised from the outside and is being used as a point of attack from the inside of your network. Inside attacks can take the form of denial-of-service (DoS) attacks (where a computer is programmed to send millions of packets of worthless data to servers on your network until the servers have to be shut down), viruses sent in e-mail or Web pages, or hidden applications that can be later exploited under remote control through holes in the computer operating system. Potential network vendors need to have the ability to allow you to track individual computers using an IP address or MAC address. This is important because, in a switched internetwork, there are no shared segments where MAC addresses and IP addresses can easily be discovered. The vendor should have a user-locator tool that will allow you to easily input a MAC address or IP address and have the tool tell you the switch and port number to which the computer is attached.
• Make sure that every vendor has support for standard security access protocols such as 802.1x, a new and important protocol for port-level security. This standard, which uses EAP (Extensible Authentication Protocol)/LEAP (Lightweight Extensible Authentication Protocol) for user authentication, can be implemented on switch ports where desktop computers are attached to the network, or on wireless access points to allow laptop computers with wireless Ethernet cards to have secure access to your network. As long as each vendor has implemented 802.1x, you will be able to take advantage of this security feature when your infrastructure is ready to support the technology.
• Consider purchasing an intrusion detection system (IDS). Numerous vendors offer competing products that will allow you to analyze what are known as "hack signatures." A hack signature is not simply a rogue packet that is running untamed through your network; rather, it is a set of actions that are strung together in a certain order to form a pattern that can be recognized and acted upon.

An IDS is typically connected to your network at the point of entry (behind your Internet router) and passively collects data as it goes in and out of that router. The IDS is designed to reassemble all the data it passively collects into its component application streams, analyze it for any known hack signatures, and act upon what it finds. Actions can be as simple as notifying you when a hack has been attempted, or can take the form of an access list that is dynamically placed on your Internet router to block the attack at the point of ingress. An IDS can be a powerful tool in protecting your network and in helping to capture perpetrators: its auditing capabilities produce records that can be exported and given to law enforcement authorities to aid in the capture of would-be criminals. Make no mistake — no matter how young or old, whether for fun or profit, any individual who knowingly compromises your network is a criminal who should be punished to the full extent of the law.

Become familiar with security terminology. Knowing the language of the trade will allow you to be conversant with those vendors you wish to evaluate for an IDS purchase. It will also help you better understand the full extent of all the potential hacks that could be perpetrated so you can make an informed decision as to what level of security you would feel comfortable with implementing in your network.

WRITING A REQUEST FOR PROPOSAL (RFP)

Now that you have completed all of your analyses and have selected a list of vendors and products to consider for your new network, you will need to write a Request for Proposal (RFP). The RFP should address all the issues that were previously discussed in this chapter, and take into account all your research. The RFP should require the selected vendors to produce the following information:

• An overview of the background of the vendor. This can include a history of the vendor in the industry, maturity of the proposed products, and a list of resources that can be publicly accessed for further research.
• A diagram of the new proposed network.
• A detailed description of how the new network will function.
• A statement of compliance with all of the features that the new network will need to support. This information will come from the needs analysis you performed at the outset of the project.
• A bill of materials detailing list prices and discounts for each item purchased. Be sure there are no hidden costs; the quote needs to include a detailed breakdown of all chargeable items, such as hardware and software, installation costs, and ongoing maintenance.
• A statement of compliance with 802.1Q interoperability.

• A statement of compliance with the 802.1X security authentication protocol.
• A detailed test plan to test interoperability with your existing network equipment.
• A detailed implementation plan for installing the new network. This plan should take into account interconnecting to an existing network, migration plan recommendations, a timetable, and installation costs broken down by the hour. It is common to require that the vendor provide you with a not-to-exceed price for installation services. This gives the vendor the ultimate incentive to either put more engineers on the job to complete the implementation quickly, or lose money. Simply put, a not-to-exceed quote for installation services means that any time spent by the vendor past the agreed-upon cost limit is at the loss of the vendor.
• An overview from the vendor on customer support. Items that should be addressed include:
— Hours of operation
— The number and location of the vendor's customer support centers
— First-call closure rates
— Content of the vendor's Web site (manuals, configuration guides, software, access to forums, ability to open trouble tickets on the Web)
— Local engineering presence
— Any awards that the vendor has received for its customer support
— A statement from the vendor to commit resources for product training. This could be in the form of training credits at certified third-party training partners, or on-site training in your facility before, during, or after the installation is complete.
— A statement of financial standing. This should include a copy of the vendor's annual report and any other relevant financial data, such as bond rating, debt load, and available cash (in liquid assets and investments).
— A list of three to five reference accounts that can be contacted by phone and perhaps visited in person. At least two of these references should be local to your area. The reference accounts should be of similar size, in the same industry as your company, and have implemented similar technology with a like design.
— A letter signed by the vendor, certifying the submission of the RFP. This can be a very simple letter that states who the vendor is and for what that vendor is submitting the RFP. This letter should be signed by a manager working for the vendor who is directly responsible for making good on any promises made to you in the RFP response, and by the sales team that will be proposing the network to you. This is important because it tells the vendor that you will not only hold that vendor to any promises it makes in the RFP, but that you know exactly who to turn to if problems arise later on.

— A list of concessions from the vendor that it is willing to make in order to win your business. This list could include items such as future discounts for smaller purchases, ongoing training, a commitment to quarterly visits to assist with any open issues, a commitment to assist your staff in obtaining a level of industry certification on the vendor's equipment, and other items. Vendors can be very creative in this regard. It is possible for a deal to be won or lost based on the willingness of a vendor to make certain concessions.
— A date and time for when submissions must be complete. It is important to be firm on these time limits, as they will apply fairly to all vendors.
— A common format that all vendors need to use when submitting their responses to the RFP. This should include a requirement to structure the document using certain formatting rules, and a requirement to submit the RFP both on paper and electronically. It is common to provide each vendor with an electronic template to use when they write their responses. This will ensure a common format, and allow you to more easily compare responses later on. You should also require the vendors to provide you with a certain number of paper copies so you will not have to make copies yourself after the submissions are complete.
— A statement from you that says you reserve the right to continue negotiation on the overall pricing after the RFP responses have been submitted but before the project has been awarded to any vendor. This will give you the opportunity to fine-tune the quotes from the vendors once you get to the evaluation stage of the RFP process. Vendors can often meet every criterion to win an RFP, but be off on price. If you feel that you can make the price work, given that all other criteria have been successfully met, you will want to allow yourself the opportunity to further negotiate later on.
— A defined means of acceptable communication throughout the RFP process. During the process of responding to an RFP, vendors will have questions. You need to determine what will be an acceptable means of communication with you that can be applied fairly to all vendors. Acceptable means of communication can be any of the following:
— Assign one person in your organization as a point of contact through which all communications during the RFP process will be funneled.
— Any questions that are posed to your internal contact person will be compiled and sent via electronic mail to all RFP participants, to ensure fairness in the RFP process. The questions should be sent to all participants in a timely manner, as should your response.

— If you decide toward the end of the RFP process that your vendors need to have more time, grant the same extension to all participants.
— Define in advance how you will communicate your decision on the RFP. This could be via e-mail, U.S. mail, or by telephone or voice mail.

Most vendors are honest and will treat you and your RFP process with respect. However, some vendors are only concerned about closing the sale, and anything they can do to close the deal is, in their mind, fair game. There are things that you should do during the RFP process to keep the process fair and to protect your company from any semblance of impropriety:

• Make sure that you and anyone else involved in the decision process avoid any questionable contact with the vendors during the RFP. During the process, vendors will try many things to curry favor with you. These things include taking you to lunch or dinner, giving you and your staff shirts with the vendor logo on them, offering you side deals that are not part of the RFP, taking you to a sporting event or golfing, or even trying to outright bribe you. It is extremely important, once the RFP process has begun, that you avoid any and all contact with vendors outside the predefined communications guidelines set forth in the RFP. We live in a litigious society, and you will be opening up your company to potential liability if you accept improper gifts from a vendor or facilitate improper contact with a vendor during the RFP. The networking industry is very small, and rumors about actions such as accepting improper gifts get around. A vendor that is not selected could use this against you in a court proceeding, arguing that the process was compromised by your actions. Financial damages could be exacted as a result, and the blame will fall squarely on your shoulders.
• Treat all vendors fairly. There is nothing that will gain you a better reputation than treating all vendors during an RFP fairly and with respect. Share all communications with all vendors, give all vendors the same concessions, hold all vendors to the same timetables, and grant all extensions equally.

SELECTING A VENDOR

Once you have completed writing your RFP, submitted the RFP to your selected vendors, managed the submission process, and accepted all of the responses, it is time to select a vendor. There are many different ways to evaluate RFP responses, but there are a few things that should be included in your evaluation method and criteria, regardless of how you make your final decision.

Strongly consider having each vendor present its RFP response in person in a meeting with your staff and all other decision makers. Despite the best of intentions, even the best-written proposals are not always interpreted as intended. Holding an RFP proposal meeting gives respondents an opportunity to have their proposal viewed both in writing and in person. This way, any subtleties that did not come out in the written response can be pointed out in the meeting, and any questions responded to in an open format. You can assign a grade to each vendor's presentation and factor this in as an evaluation criterion when making your final decision.

Once all vendors have submitted their responses and presented them in person, you will need to evaluate the written proposals. To evaluate RFPs, you can use a scoring system similar to how a teacher grades a test. Start with a total point score, and then go through each section of the original RFP template, assigning a point value to each section relative to its importance. (An item of greater significance to you, such as feature compliance, might be of higher value to you than the vendor overview. This means you would give feature compliance a higher point value than the vendor overview.) Once all sections have been assigned a point value, review each RFP response section by section, and grade each vendor's response, with the best responses receiving the full point value for a section and the weaker responses receiving fewer points or none at all. Once the grading is complete, tally all the scores to see which vendor had the best overall rating. There will likely be things that you will want to use in your decision process that will be more subjective than objective. The most important thing to keep in mind when evaluating RFPs is to apply all evaluation criteria to all vendors equally.

With the scoring complete, you will need to lay all of the information in front of you and make your final evaluations. When analyzing all the data, try to keep in mind your original goals for this network design and implementation project. You want to build a new network that will meet the needs of your user community, that will allow for a secure, application-enabled infrastructure that will scale and grow as your needs change, and that will be completed on a budget. You should keep an open mind on price: sometimes, a vendor will meet every criterion but miss on price. This is why you included a statement in your RFP that gave you the option to further negotiate with all vendors on price after the RFP responses have been submitted. This will also give you the opportunity to eliminate price as a barrier to choosing one vendor over another. You may often find a particular vendor's story so compelling that you are willing to pay a premium to use their equipment.
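The grading approach just described is, in effect, a weighted scorecard. The sketch below shows one way to organize it; the section names, weights, vendor names, and scores are all hypothetical and would come from your own RFP template and evaluation team.

    # Illustrative weighted-scorecard sketch for RFP evaluation.
    # Section weights and the per-vendor scores (0-10) are invented examples.

    weights = {"feature compliance": 30, "price": 25, "customer support": 20,
               "implementation plan": 15, "vendor overview": 10}

    scores = {
        "Vendor A": {"feature compliance": 9, "price": 6, "customer support": 8,
                     "implementation plan": 7, "vendor overview": 8},
        "Vendor B": {"feature compliance": 7, "price": 9, "customer support": 7,
                     "implementation plan": 8, "vendor overview": 6},
    }

    def weighted_total(vendor_scores):
        # Each section contributes (score / 10) * section weight; totals are out of 100.
        return sum(weights[s] * vendor_scores[s] / 10 for s in weights)

    for vendor, vendor_scores in scores.items():
        print(f"{vendor}: {weighted_total(vendor_scores):.1f} / 100")

Fixing the weights before any response is read, and applying them identically to every vendor, is what makes the tally defensible if a losing vendor later questions the outcome.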

Once your evaluations are complete, your negotiations have been settled, and your vendor selected, you will need to inform all participants of your decision. Make sure you communicate this decision by way of the method that was determined in advance in your RFP. You can expect the vendor that is awarded the RFP to be quite pleased, and those that lost to be quite unhappy or even upset. The RFP process is very time-consuming for both the author (you) and the participants. It is common for a losing vendor to feel let down after a rejection, but you must keep your composure and understand that your vendors are also human. Most will be very professional, but every once in a while you will find a vendor that is less than professional. Simply make a mental note of it and move on.

CONCLUSION

The process of designing, building, and implementing an enterprise network is full of pitfalls and rewards. Using this chapter as a guide, you will be off to a good start in building your own network. This chapter has provided you with just an introduction to the process of managing a network design and provisioning project. In time, you will bring your own experience and the experience of others to bear as you proceed with this process yourself. There are few experiences in networking that will give you better access to more technologies in such a short period of time than managing a network design, provisioning, and implementation project. Be careful, deliberate, meticulous, and fair, and you will find that your efforts will be greatly rewarded in the end.

Recommended Readings

Perlman, R. (1999). Interconnections: Bridges, Routers, Switches, and Internetworking Protocols, 2nd edition. Reading, MA: Addison-Wesley.
Thomas, T., Freeland, E., Coker, M., and Stoddard, D. (2000). Designing Cisco Networks. New York: McGraw-Hill.
Oppenheimer, P. (1999). Top Down Network Design. Indianapolis, IN: Cisco Press.


Chapter 20

The Promise of Mobile Internet: Personalized Services

Heikki Topi

Public perceptions regarding the importance of wireless Internet access technologies and their role in the corporate IT infrastructure have fluctuated dramatically during recent years. Until early 2000, investors, technology analysts, and corporate IT executives held very positive views on the future of these technologies and the prospects for mobile commerce and other corporate uses of mobile Internet applications. However, the post-2000 downturn in the telecommunications industry in particular, and IT industries in general, has led to a more pessimistic view about the speed of innovation in information and communication technology solutions in organizations.

As with any new technology, the future will be full of surprises. Nevertheless, it is already clear that wireless access technologies can be integrated effectively into a variety of new applications and services, both within an organization's internal systems and as part of services targeted to external customers. Innovative organizations that begin now to evaluate the business opportunities with an open approach and willingness to learn will be the best prepared to leverage the benefits from a new infrastructure that enables widespread wireless access to the Internet on truly mobile devices.

The objective of this chapter is to assist decision makers in evaluating the current status and likely future of wireless Internet access and mobile commerce technologies. The chapter begins by focusing on those characteristics of mobile services that differentiate them from fixed-line access — including both the limitations of today's mobile devices and the mobile capabilities that have not yet been fully exploited.


Next, the chapter provides a framework for thinking about the business opportunities for real-time and non-real-time personal applications for wireless Internet access: person-to-person and person-to-computer. Finally, the chapter discusses some of the key assumptions regarding mobile technology trends that we see as a reasonable basis for decisions about these technologies and provides recommendations for organizations considering the integration of mobile services into their infrastructure.

WHAT IS SPECIAL ABOUT MOBILE INTERNET?

Some of the most fundamental questions related to wireless Internet access and mobile commerce involve (1) the differences between wireless and fixed-line Internet access methods, (2) the current limitations of today's mobile devices, and (3) the special capabilities the mobile Internet provides compared to other Internet access mechanisms. This chapter uses the term "wireless Internet access" to refer to technologies that provide access to the Internet using methods that do not require cabling, and the term "mobile Internet" to refer to the use of wireless Internet access technologies to provide Internet-based services without constraints created by wired access.

Wireless versus Fixed-Line Internet Access

At the surface level, the core elements of the mobile Internet are not fundamentally different from those of the fixed-line Internet. In both contexts, various types of client devices are linked to an access network that, in turn, is connected to a higher-level distribution network, and are eventually routed to the Internet core. In both contexts, servers are accessed through fixed links. Thus, the real technical differences are in the characteristics of the networks providing access to the client devices. Exhibit 1 presents three wireless Internet access technologies on a continuum from personal area networks (PANs) to wide area networks (WANs).

With certain types of wireless access technologies, the differences between wireless and fixed-line Internet access are small. Wireless local area network (WLAN) technologies using the 802.11 protocol can be used to provide high-speed (currently mostly 11 Mbps, but increasingly 54 Mbps and soon faster) wireless access within a local area (such as a home, office, or commercial hot-spot). In practice, the only difference between the wireless access technology and fixed LAN access is the nature of the link between the user terminal and the network access point/access device. The Bluetooth wireless protocol offers restricted wireless access for personal use (often called personal area networking). This technology is intended mostly for communication between various mobile devices and their accessories; for example, Bluetooth could be used to link a PDA (personal digital assistant) to a cell phone that provides packet-switched Internet connectivity.

[Exhibit 1. Three Examples of Wireless Internet Access Technologies on a Continuum: a diagram contrasting three paths from a user device to the Internet. A Bluetooth personal area link connects a PDA or laptop through a Bluetooth-enabled phone, which in turn uses the GPRS infrastructure shown in the rightmost column; an 802.11 WLAN access point reaches the Internet over Fast Ethernet and an xDSL/cable modem/T-carrier router; and the wide area path runs through a base station subsystem, serving GPRS support node, GPRS backbone, and gateway GPRS support node. Two rows beneath the diagram label the wireless protocols (Bluetooth; 802.11b/a/g Wi-Fi; CDPD, GPRS, CDMA, UMTS/W-CDMA) and the extent of access (personal, local, wide).]

However, providing uniform, reliable wireless access for wide geographical areas (using, for example, CDPD, GPRS, or UMTS) is a much more complex task than providing WLAN access at a single location. Complex technical solutions are required to ensure smooth hand-offs from one base station to another, roaming between the networks of different service providers, efficient management of available frequencies, maintaining connections over long distances between terminals and base stations, providing accurate location information, providing carrier-class wireless security, and customer management and billing. In these environments, special protocol families (such as WAP) are still often used for carrying the packet data on the mobile part of the networks. Exhibit 1 also includes a diagram describing a simplified version of the GPRS architecture as an example of wireless WAN access to the Internet.

Current Limitations of Mobile Technologies

At this time and in the foreseeable future, mobile access devices and wireless networks also have some inherent disadvantages. True mobility unavoidably requires that an access device be physically small, both in terms of weight and external dimensions (that is, it can be carried unobtrusively in a pocket). With current technology, this means that it has a small display and a slow data entry mechanism. Further, mobile devices are normally battery powered. With current technology, this means the need to limit the power consumption of various components and to take issues related to power into account in all design decisions. Also, the bandwidth available for the communication between mobile devices and their respective base stations is currently constrained by the limitations set by radio-frequency technologies. It is highly likely that the capacity disadvantage of wireless links compared to fixed-line connections is here to stay. In addition, wireless links are less reliable and less secure than fixed-line connections.

When these limitations are taken into account, it is obvious that wireless Internet access devices have to offer something qualitatively different from fixed-line access before they will be widely adopted. Neither consumers nor corporate users are likely to choose a mobile interface to an application over an interface based on a fixed-line connection unless the mobile solution provides some clear advantages that outweigh the limitations outlined above.

Potential Benefits of Wireless Applications

Successful mobile Internet applications must, in one way or another, be based on the most important advantages of mobility, such as the following.

Always Available, Always Connected. A user can, if he or she so chooses, have a mobile access device always available and always connected to the network. Consequently, a user can continuously access information and act very quickly based on information received, either from the current physical environment or through the wireless device. A potentially very significant advantage is that wireless access devices provide ubiquitous access to any information resources available on the Internet and corporate (or personal) extra- and intranets. With well-organized databases, proper user interfaces, and highly advanced search and retrieval tools adapted to the mobile environment, this may lead to significantly improved performance in tasks that depend on the availability of up-to-date factual data or the ability to update organizational information resources in real-time.

One challenge facing corporate users is that of finding applications where mobility genuinely makes a difference. In many cases, the availability of real-time information does not provide meaningful benefits because no user action is dependent on it. Among the first employee groups to benefit from applications built using mobile Internet technologies are sales and service personnel who can receive up-to-date data regarding the customers they deal with or internal corporate data (such as inventory status or product configuration information), and who also can continuously update the results of their actions in a centralized database. This means that other employees communicating with other representatives of the same client company, or managers interested in receiving continuous updates regarding a particular situation, are able to follow the development of relevant events in real-time.
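To make the "update the central system the moment something happens" pattern concrete, the sketch below shows how small such a transaction can be once connectivity is assumed. The endpoint URL, field names, and values are entirely hypothetical, and a real deployment would add authentication and transport security appropriate to the infrastructure.

    # Hypothetical sketch: a field technician's device posts one status update
    # to a central system as soon as the work is done. The endpoint and the
    # record fields are invented for illustration.
    import json
    import urllib.request

    update = {"technician_id": "T-0417", "work_order": "WO-88123",
              "status": "completed", "parts_used": ["fan-assembly"],
              "timestamp": "2003-05-14T10:32:00Z"}

    request = urllib.request.Request(
        "https://example.invalid/api/work-orders",   # placeholder endpoint
        data=json.dumps(update).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(request)  # left commented out: the endpoint is fictional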

Wireless access provides new opportunities for highly accurate and timely data entry. As long as the infrastructure provides sufficient but still transparent security, mobile devices with Internet access can be used as universal interfaces to enterprise applications that require data entry as an integral part of an employee's work. Thus, it may become significantly easier to collect data where and when a real-world transaction occurs. Unlike portable devices without network connectivity that enable only data collection for further batch processing, continuous wireless connectivity makes it possible to update the data in organizational systems in real-time.

Personal Services and Support. Wireless Internet access devices (or at least their identification components, such as GSM SIM cards) are more personal than any other terminals that provide network connectivity. Thus, they have the potential to be extensions of users' personal capabilities in a much stronger way than any other access device type. Both cellular handsets and PDAs are characteristically personal devices linked to one individual, and whatever the dominant form factor(s) for wireless Internet access devices will eventually become, it is highly likely that these devices will be equally personal. This is a significant difference compared to the fixed-access world where computers are often shared.1

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE to the fixed-access world where computers are often shared.1 With appropriate software, organizational systems utilizing mobile terminals can provide every employee personalized support in their tasks wherever and whenever they happen to be performing them. The desired support level (e.g., instructions needed for performing maintenance tasks or foreign language support) can either be chosen by the user or automatically set by the application that adapts based on the user’s past actions. Similarly, a company can provide its customers with highly personalized product support or other services through a mobile terminal. In practice, this also means that both a user’s and any other party’s ability to collect data about the user and the user’s actions is significantly better than with fixed-access devices. Thus, it is possible for a device to not only adapt to its user but to also provide highly personalized services based on the data collected during previous usage and the current context. This, of course, creates potentially significant privacy problems if the collected data is used in an inappropriate way. Context-Specific Services and Support. A mobile access device can identify its location using a variety of technologies with an ever-increasing accuracy (with an integrated Global Positioning System (GPS) receiver, the error can be as little as 30 feet). This creates an opportunity for companies to offer services based on an integration of three factors: location, time, and a user’s identity. Any two of these factors alone are already enough to provide highly personalized services, but when they are combined, the usefulness of the contextual information increases significantly. Unfortunately, privacy and security issues become more difficult at the same time. This is, nevertheless, an area where corporations have an opportunity to offer highly context-specific value-added services to their customers. It is, however, a true challenge to find mechanisms to communicate with customers using wireless devices so that they feel that the information they receive from the interaction is valuable for them and that the communication targeted to them is not intrusive. Many early experiments with mobile marketing suggest that approaches in which customers are given an incentive to initiate the communication — for example, by inviting them to participate in sweepstakes with their mobile device (the pull approach) — are more successful than sending unsolicited advertisements to them (the push approach), even in countries where the latter is not illegal.
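To make the combination of location, time, and user identity more concrete, the short sketch below shows one way such a context check could be expressed. It is purely illustrative and not drawn from the chapter: the Offer record, the select_offers function, and the sample coordinates and segment names are hypothetical, and a real service would also have to address the consent and privacy issues noted above.

# Illustrative sketch only: selecting context-specific offers from GPS
# location, time of day, and user identity. All names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, time
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Great-circle (haversine) distance between two GPS fixes, in kilometers.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

@dataclass
class Offer:
    text: str
    lat: float
    lon: float
    radius_km: float
    start: time      # daily window during which the offer applies
    end: time
    segment: str     # customer segment the offer is targeted at

def select_offers(offers, user_segment, user_lat, user_lon, now: datetime):
    # Combine the three context factors: identity, location, and time.
    return [
        o.text for o in offers
        if o.segment == user_segment
        and o.start <= now.time() <= o.end
        and distance_km(user_lat, user_lon, o.lat, o.lon) <= o.radius_km
    ]

offers = [Offer("Lunch discount at the airport cafe", 60.317, 24.963, 1.0,
                time(11, 0), time(14, 0), "frequent_traveler")]
print(select_offers(offers, "frequent_traveler", 60.318, 24.960,
                    datetime(2003, 5, 2, 12, 30)))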

A FRAMEWORK OF PERSONAL USES OF A MOBILE INTERNET

Exhibit 2 presents a framework of the potential personal uses of the mobile Internet along two dimensions: (1) the mobile Internet enables communication either between people (person-to-person) or between a person and a computer or “smart” device (person-to-computer), and (2) a specific use of wireless Internet access technologies may need either real-time or non-real-time communication.2

Exhibit 2. Uses of Mobile Internet Technologies

Person-to-person
  Real-Time (Calls):
  • Basic voice calls
  • Rich calls — video, still images, audio
  • Application sharing
  Non-Real-Time (Messaging):
  • Basic messaging
  • Rich messaging, including e-mail and fax access
  • Access to electronic conferencing

Person-to-computer
  Info Retrieval and Applications (Real-Time or Non-Real-Time):
  • Access to intranets and internal applications
  • Use of multimedia libraries
  • Presentation support
  • Sales support
  • Training/HR development
  • Service/maintenance support
  • Retail sales interface for customers
  • Automated customer service
  • Customer service support

Person-to-Person

Both research and anecdotal evidence suggest that flexible applications and services that can link end users directly to each other, either synchronously (in real-time) or asynchronously (in non-real-time), will at least initially be the most common uses of wireless Internet technologies. Basic synchronous voice service will, in all likelihood, maintain its position as the most popular communication mode, augmented with video and other rich call elements when they become technologically and economically feasible. The richness of synchronous mobile calls will increase, but most likely in the form of an exchange of shared application elements and multimedia data simultaneously with a regular voice call. Application sharing requires, of course, interoperability between application versions on different access devices and adaptability of applications to different terminal environments.

Video communication is widely used as an example of a future technology in the marketing materials of wireless equipment manufacturers and operators and, therefore, a brief discussion of this topic is warranted. Both experience with and research on desktop videoconferencing suggest that, particularly when the focus is on the task(s) (and not on, for example, getting to know each other), the video component is the first to go if desktop space is a constraint. This will likely be even more true when display size is very limited, as it unavoidably will be at least for the first few years of availability of real-time wireless videoconferencing. One of the questions that will remain important for the managers responsible for corporate information and communication infrastructure is the cost effectiveness of mobile video, which will likely be a premium service for a significant period of time.

Recreational and personal uses of mobile videoconferencing might eventually become significantly more popular than professional uses. Linking individuals and groups to social events across geographical boundaries is an attractive idea if the quality of the technology is sufficient.

In addition to real-time calls, messaging is and will continue to be another approach to person-to-person communication — an approach that does not require a real-time connection between participants, such as e-mail in the fixed Internet environment and the short messages (SMS) that have become very popular particularly among GSM users in Europe and in parts of Asia. The richness of messages will also increase with multimedia components, such as images and video clips; the first commercial multimedia messaging services (MMS) were launched in the summer of 2002. It is easy to see the advantages and the attractiveness of being able to access one's messages without the limitations of time and place, whatever the technical mechanism with which they have been created. With the introduction of the true mobile Internet, mobile handsets are likely to become one of the primary mechanisms used to access one's messages, irrespective of the technology with which they were originally transmitted. From the corporate perspective, the effects are twofold: (1) an increasingly large number of mobile phones will allow easy and affordable access to e-mail (including attachments); and (2) particularly in geographical areas where one mobile standard dominates (e.g., GSM/GPRS/WCDMA in Europe), rich messages that are exchanged directly between mobile terminals will become a business tool, especially if mobile handsets are equipped with improved displays and versatile input devices (e.g., digital cameras, pen scanners) as has been predicted.

The true value added by the two usage types described above (rich mobile calls and messaging) is an extension of the already-existing modes of communication to new contexts without the limitations of time and place. The fundamental nature of these services appears to stay the same, but the new freedom adds value for which consumers and corporations are probably willing to pay a premium, at least in the beginning. For a corporation that is considering the adoption of mobile technologies, the implications are clear. As shown, for example, by cellular phone usage, corporate users will adopt new communication technologies and make them part of their everyday toolkit if they perceive them as personally convenient, and it will be up to management to decide whether or not boundaries are needed for the usage of the premium services. The only questions are the timing of adoption and the development of appropriate policies by organizations.

The Promise of Mobile Internet: Personalized Services Mobile rich calls and messaging will be standard communication technologies used by the great majority of corporate users soon after technologically viable and reasonably priced options become available. This is, however, hardly surprising, and it is difficult to see the opportunities for the creation of true economic value compared to other companies within an industry. These technologies will be offered as a standard service and, as such, any innovative usage is easy to copy. First movers will probably have some advantage, but it is unlikely to be sustainable. The best opportunities for the development of new applications can probably be found by integrating rich messaging with intelligent back-office solutions, which capture, maintain, and utilize data about the messaging communication. Person-to-Computer Truly innovative applications are likely to be developed for person-to-computer services, which include information retrieval and interactive applications. These may require either real-time or non-real-time delivery of data from the servers that provide the service. Unlike the first two methods, which are person-to-person infrastructure services, the usage of which is universally the same, the use of mobile networks as an application platform is just a starting point for the development of both tailored and packaged applications that utilize mobile access as one of the fundamental features of the infrastructure. Nobody will be able to reliably predict what the most used and most profitable applications and services will be until they have been introduced and have stood the test of time. This, naturally, means that experimentation is essential. Without experimentation it is impossible to find new, profitable applications and create new innovations. Thus, it is essential to create a network of partners to allow for breadth in experimentation. THE FEATURES OF WINNING PERSONAL APPLICATIONS Although the success of any single mobile application cannot be predicted with any certainty, we can use the characteristics discussed above that differentiate the mobile Internet from the fixed-line Internet as a basis for evaluating the characteristics of the applications that have the potential to be successful. Successful mobile applications are not applications that have simply been ported to mobile terminals from the fixed-line environment. Powerful, productivity-enhancing mobile applications are likely to have the following features: • They are highly context specific and adapt to the user, the location, and the task for which they are used. Context specificity is one of the fundamental strengths of mobility, and high-quality mobile applications must be built with this in mind. 249

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE • They adapt to the characteristics of the terminal devices and the network being used. Wireless access devices, like mobile phones, are very personal choices, and it is unlikely that we will see convergence toward any one device type. • They bring relevant aspects of fixed-line applications to the mobile environment. Instead of simply offering all the functionality of traditional desktop applications, successful mobile applications implement a relevant subset of functionality that is specifically chosen for the mobile context. • They are fast and easy to use. In many cases, a user of a mobile application is literally moving and can pay only limited attention to applications. Complex command sequences or navigational structures therefore prevent their use. • They allow users to perform simple, well-defined tasks that directly enable the users to improve their productivity, such as retrieving data from a catalog, getting instructions for performing a task, or comparing alternatives. • Most applications can become successful only if they are willingly adopted by a large number of people, and the adoption rates of applications with complex user interfaces have never been particularly high. • They increase a user’s sense of belonging to a group or community while simultaneously emphasizing his or her individual characteristics. This is true particularly with consumer applications, but it can also be utilized within a corporate setting. Many successful early applications utilizing SMS technology (e.g., the downloading of ring tones in Europe) give the users a chance to identify and communicate with a specific group and at the same time convey an individualistic message. TECHNOLOGY TREND ASSUMPTIONS Decisions about investments in mobile applications and services should be made based on the following assumptions regarding technology trends: • Decreasing cost of bandwidth. The cost of bandwidth for accessing Internet resources through mobile devices will continue to decrease. For example, currently in the United States, unlimited packet-based wireless access to the Internet costs $40 to $50 per month at the basic capacity level (in 2002, 20 to 40 kbps). It is likely that the cost for the user will remain approximately the same while the data rates continue to increase. Particularly in the United States, widespread acceptance of mobile technologies is likely to be possible only with plans that offer unlimited access because most potential adopters are accustomed to this pricing model with fixed access. Pricing models will likely vary 250










globally, depending on the region. The penetration rates for mobile Internet devices have the potential increase at a very rapid pace if consumers perceive pricing models (covering both terminal devices and services) to be such that they provide fair value. Overhyped wireless data rates. At the same time, it is important to note that the increase in real data rates will not be as rapid as predicted. Data rates that have been promised for 2.5G (GPRS) and 3G (UMTS)3 technologies in the marketing literature are up to 180 kbps and 2Mbps, respectively. It is highly unlikely, however, that an individual user will ever experience these rates with these technologies. According to some estimates, realistic data rates are around 50 kbps for GPRS and 200 kbps for UMTS by 2005.4 This, of course, has serious implications from the perspective of the services that realistically can be offered to the users. Immediate, always-on access. Widespread adoption of any Internet access technology requires access that is always available without lengthy log-on procedures. Products based on circuit-switched technologies that establish a connection at the beginning of the session will be unlikely to succeed, as the example of WAP (Wireless Application Protocol) using circuit-switched technologies has shown. For example, the packet-switched approach was one of the major advantages of i-mode over the early WAP. All new 2.5G and 3G technologies provide mechanisms for packet-switched communication. Technologically, this is based on an increased utilization of the Internet Protocol (IP) on mobile networks. New network architectures are separating intelligent services that will take place at the network edge from the fast packet-forwarding capabilities of the core networks. Widespread availability of mobile access. The geographical area within which mobile Internet service is available is likely to continue to grow. In the United States, it is unlikely that ubiquitous availability will be achieved at the national level because very large, sparsely populated areas make it unprofitable; but at least in Western Europe (including the Nordic countries), there will be few areas without coverage. We will see several waves of capacity increases, which will invariably and understandably start in large metropolitan areas and gradually make their way to smaller cities and, from there, to rural communities. This pattern is likely to be repeated at least twice during the next ten years — first with 2.5G technologies such as GPRS, GPRS/EDGE, and CDMA2000 1x and then with 3G technologies such as WCDMA and CDMA2000 3x. The speed of 3G deployment will depend on the financial status of operators, the success of 2.5G technologies, consumer demand for fast-speed access, and regulatory actions. More than one global access standard. In the foreseeable future, there will not be one clearly globally dominant technology for providing 251

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE wireless Internet access despite the efforts of the ITU-T and other global standardization organizations. The technology development path is defined most clearly for those regions where GSM is currently the dominant 2G technology (Europe and Asia except Japan). In these areas, GPRS is a natural 2.5G step in the direction of WCDMA as the 3G technology. In the United States, however, the proliferation of access technologies currently prevalent in the 2G markets seems likely to also continue in the future. Not only are operators using 2G CDMA technology likely to migrate to CDMA2000 1X/3X networks and operators using GSM or TDMA technologies toward WCDMA through GPRS/EDGE, but the market also has a large number of players whose services are currently available and based on lesser-known and slower packet-based technologies such as Mobitex (Cingular Wireless) or DataTAC/Ardis (Motient). In addition, WLAN technologies (currently the 11 Mbps 802.11b and soon the 20–54 Mbps 802.11g and 54 Mbps 802.11a) using unlicensed frequencies have recently received a lot of attention as a possible alternative for 3G technologies — particularly for data — in limited geographical areas with high user densities. As a result, at least at the present time, the migration plans toward higher speeds are not clearly specified in the United States. Even in Europe, WLAN technologies may cause interesting surprises to the operators that have invested heavily in 3G licenses. Fortunately, providers and users of mobile services are not limited to one operator or to one access technology, but they should be aware of the potential technological constraints created by the dominant access technologies in particular geographical regions. Service providers and companies developing in-house solutions therefore need to design their services so that they are usable from a variety of devices and take the special characteristics of each end-user access device into account. Future terminal devices must be able to use multiple access technologies from different providers without dropping a service when the client moves from one access mode to another, and future applications must be able to adapt to variable data rates. • Consumer needs still rule. The underlying technologies are irrelevant to most consumers, and all companies designing and marketing consumer services should keep this clearly in mind. Several times during the history of mobile services, we have seen cases in which a product that seems to be technologically inferior has become dominant because of seemingly irrelevant features that consumers prefer. Often, apparently meaningless services have become hugely successful because of a feature or a characteristic, the appeal of which nobody was able to predict. This is also likely to happen in the area of the mobile Internet and mobile commerce. The most successful i-mode services in Japan and short message (SMS) services in Europe have included 252

The Promise of Mobile Internet: Personalized Services service types for which major commercial success would have been very difficult to predict (e.g., virtual fishing in Japan or downloading of various ring tones in Europe). • 3G is not the end of development. Technological development will certainly not end with 3G. It is likely that 4G solutions will provide seamless integration and roaming between improved versions of technologies in each of the current main access categories: personal area network technologies (Bluetooth), wireless local area networks (802.11 protocols), and wide area network access technologies (GPRS, WCDMA, CDMA2000). 5G technologies are already under development, and some observers predict that they may replace 3G as soon as 2010. Based on the recent experiences with 3G deployment, however, delays are likely. RECOMMENDATIONS Based on the above discussion of the differences between mobile and fixed-access Internet technologies, the potential uses for the mobile Internet, and assumptions about the development of mobile communication technologies, several recommendations can be made to organizational decision makers. • Development of wireless handsets and infrastructure is still in its infancy, yet the importance of understanding mobility and providing relevant mobile capabilities for the members of an organization is intensifying. Learning how mobility will change the links between an organization and its employees, customers, suppliers, and other stakeholders will take time and require experience; thus, it is important to start now. • Understanding the unique and sometimes non-intuitive characteristics of mobility and applications utilizing wireless access technologies will require individual and organizational learning. Some of the most difficult issues are related to the development of adaptive user interfaces that provide the best possible user experience in a variety of access environments. The challenges are greatest in environments where the constraints are most severe; that is, on the physically smallest mobile devices. • Few organizations will have all the capabilities needed to develop the best possible applications for every purpose. Partnering and various other collaborative arrangements will be just as, if not more, important in the mobile Internet context as in the fixed Internet environment. • The introduction of mobile terminals and applications is, as any other technology deployment project, a change management project and should be seen as such. This is particularly important in multicultural 253

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE environments in which cultural factors may strongly affect people’s perceptions and expectations. • Mobile Internet technologies introduce serious issues regarding security and privacy. These issues must be resolved satisfactorily before widespread deployment. The use of truly mobile devices is very personal, and users want to keep it so. Users’ perceptions matter as much as reality. • Global confusion regarding the access technologies is likely to continue, although there are good reasons to believe that future technological integration will be based on IPv6 at the network layer and vendorindependent technologies such as Java and XHTML at the application layer. Together, these form a sufficient standards platform for non-proprietary applications to succeed as long as terminal devices can effectively utilize multiple radio access technologies and connect to multiple service providers. • Expect a fast-moving target. The technology infrastructure will continue to improve quickly, as will the capabilities of the access devices. CONCLUSION It is essential that decision makers begin to evaluate the capabilities and the promise of various mobile access technologies and mobile applications. Systems that utilize wireless Internet access technologies will create new business opportunities, many of which will be highly profitable for those corporations that are best positioned to take advantage of them. Notes 1. Shared devices can, of course, be personalized in networked environments by utilizing the capabilities of network operating systems, but the level of personal attachment between a user and a terminal device is closer if the device is used exclusively by one person. 2. Nokia’s MITA architecture (Mobile Internet Technical Architecture — The Complete Package, IT Press, Helsinki, 2002) divides the uses of mobile communication infrastructure into three categories in terms of the immediacy needs: rich call, messaging, and browsing. 3. A typical notation that is used to refer to different generations of mobile technologies uses 2G (the second generation) for current digital TDMA, CDMA, and GSM technologies, 2.5G for packet-based technologies that were originally intended to be intermediate technologies between 2G and 3G such as GPRS, and 3G for next-generation, high-speed, packetbased technologies such as WCDMA and CDMA2000. 4. Durlacher Research & Eqvitec Partners (2001). UMTS Report. An Investment Perspective. Available at http://www.tuta.hut.fi.

Sources

1. www.3gpp.org
2. www.3gpp2.org
3. www.cdg.org
4. www.gsmworld.com


5. www.i-mode.nttdocomo.com
6. http://standards.ieee.org
7. www.umts-forum.org
8. www.wapforum.org

GLOSSARY Bandwidth: Formally refers to the range of frequencies a communications medium can carry but, in practice, is used to refer to the data rate of a communications link. Measured in bits per second (bps) and its multiples (kbps = kilobits per second = 1000 bps, Mbps = megabits per second = 1,000,000 bps) Circuit-switched: Refers to transmission technologies that reserve a channel between the communicating parties for the entire duration of the communication event. Used widely in traditional telephony, but is relatively inefficient and cumbersome for data transmission. CDMA: Code Division Multiple Access is a radio access technology developed by Qualcomm that is widely used for cellular telephony in the United States and in South Korea. CDMA2000: One of the international radio access standards that will be used for 2.5G and 3G mobile telephony. CDMA2000 3x will provide significantly higher speeds than the transition technology CDMA2000 1x. CDPD: Cellular Digital Packet Data is a 2G packet-switched technology used for data transmission on cellular networks mostly in the United States that provides 19.2 kbps data rates. Data rate: The number of bits (digital units of transmission with two states represented with a 0 or a 1) a communications channel can transmit per second. EDGE: A radio-access technology intended to make it possible for telecom operators to provide 2.5G mobile services utilizing the radio spectrum resources they already have. It can be used by both TDMA and GSM operators. GPRS: General Packet Radio Service is a mechanism for sending data across mobile networks. It will be implemented as an enhancement on GSM networks, and will provide at first 20 to 30-kbps and later 50 to 70-kbps access speeds using a packet-switched technology. Therefore, it can provide an always-on environment in which users are not charged for connection time, but instead either for the amount of data transmitted or a flat fee per month. It is considered a 2.5G mobile technology. GSM: A 2G cellular standard used widely in Europe and in Asia but relatively little in the United States. It is used for voice, short messaging (SMS), and for circuit-switched data. 255

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE i-mode: A set of mobile Internet access services offered on the network of the Japanese cellular provider NTT DoCoMo’s network. Most of the services have been designed to be accessed using specific i-mode handsets. IP: Internet Protocol. The network layer protocol that forms the foundation of the Internet and most current internal data networks. ITU, ITU-T: International Telecommunications Union and its Telecommunication standardization section. An international organization that coordinates international telecommunications standardization efforts. Packet-switched: Refers to transmission technologies that divide the content of the transmission into small packets that are transmitted separately. Mostly used for data but increasingly often also utilized for audio and video transmission. SMS: Short Message System is a mechanism for sending short text messages (originally 160 characters long) between cellular phones. Originated on GSM networks but services can now be found on all digital network types (GSM, TDMA, CDMA). TDMA: Time Division Multiple Access is a radio-access technology for mobile telephony that is widely used in the United States. GSM is based on the same basic idea as TDMA but still requires different equipment. UMTS: Universal Mobile Telecommunications System is one of the major 3G technologies that utilizes WCDMA as its radio access technology. Telecom operators paid more than $100 billion for UMTS licenses in Germany, Great Britain, and Italy in auctions that were organized in the summer of 2000. In the fall of 2002, only experimental UMTS networks were operational. WAP: Wireless Application Protocol is a set of standards for delivering information to mobile devices and presenting it on them. The first versions of WAP suffered from serious practical implementation problems, and WAP services have not yet reached the popularity that was originally expected. WCDMA: The radio access technology that will be used in UMTS. WLAN: A Wireless Local Area Network provides wireless access to local area networks (LANs) through wireless access points. IEEE 802.11 working groups are responsible for the development of standards for WLANs.


Chapter 21

Virtual Private Networks with Quality of Service Tim Clark

Virtual private networks (VPNs) have certainly received their fair share of press and marketing hype during the past few years. Unfortunately, a side effect of this information overload is that there now appears to be a great deal of confusion and debate over what the term “VPN” means. Is it synonymous with IPSec? Frame Relay? Asynchronous Transfer Mode (ATM)? Or is it just another name for remote access? If one were to take a quick survey of VPN information on the Internet, one would have to answer, “yes, it applies to all these technologies” (see Exhibit 1). This chapter first offers a definition that encompasses all these solutions. It then specifically discusses where the convergence of voice, video, and quality of service (QoS) fits into this brave new VPN world. Finally, the discussion focuses on what offerings are deployable and testable today and what evaluation methodology to use in deciding which solution works best for a particular network.

DEFINING VIRTUAL PRIVATE NETWORKS

One way to define any technology is to examine it in terms of the services it provides to the end user. In the world of high technology, these poor souls are too often left scratching their heads, telephone in hand, waiting on the IT help-line for someone to explain to them why what worked yesterday in the network does not work today. VPNs offer the end user primarily three key services:

1. Privacy
2. Network connectivity
3. Cost savings through sharing



Exhibit 1. IP, ATM, and Frame Relay Networks Can Support VPNs

Privacy

VPNs promise the end user that his data is his own, it cannot be stolen, and his network is safe from penetration. This is implemented in a couple of ways. Sometimes, a network provider may simply convince his customer that his network service is secure and that no further security measures are required. This is often the case in Frame Relay or ATM-based networks where connections are permanent and are established via the service provider's network management interface. This, however, can be a risky proposition in that few, if any, service providers offer security guarantees or penalties in their service level agreements. Frame Relay and ATM have no inherent security designed into their protocols and are, therefore, susceptible to a number of attacks. Security solutions for Frame Relay, ATM, and IP networks are available and implemented either by the customer or service provider via router software or a stand-alone product that provides security services. The security services that these products provide typically include:

• Private key encryption
• Key exchange
• Authentication
• Access control


Exhibit 2. A System of Public, Private, and Session Keys Protects Information and Maintains Performance

Private key encryption utilizes cryptographic algorithms and a shared session key to hide data. Examples of widely available private key encryption schemata are the Data Encryption Standard (DES), Triple DES, and IDEA. Because both sides of a network connection share the same key, there must be some way to exchange session keys. This is where a key exchange protocol is required (see Exhibit 2). Key exchange protocols usually involve two keys: a public key that is used to encrypt and a private key that is used to decrypt. The two sides of a network connection exchange public keys and then use these public keys to encrypt a session key, which is exchanged and decrypted using the key exchange protocol's private key (see the illustrative sketch below). Key exchange is susceptible to man-in-the-middle attacks and replay attacks, which involve an attacker pretending to be a trusted source. Authentication prevents this by ensuring the source. When public keys are exchanged, the source is verified either by a shared secret or via a third party called a Certificate Authority.

Access control typically rounds out a VPN product's feature set. Its primary purpose is to prevent network penetration attacks and ensure that only trusted sources can gain entry to the network. This is done by defining a list of trusted sources that are allowed access to the network.

Network Connectivity

Network connectivity relates to the ability of the nodes in a network to establish connections quickly and to the types of connections that can be established. Connection types include (see also Exhibit 3):

• One-to-one
• One-to-many
• Many-to-many
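Before turning to connectivity in detail, the session-key exchange described under Privacy can be made concrete with a short sketch. It assumes the widely used Python cryptography package rather than anything prescribed in the chapter, it substitutes a modern AES-based construction (Fernet) for the DES/Triple DES/IDEA ciphers named above, and it omits the authentication step (shared secret or Certificate Authority) that the text notes is required to defeat man-in-the-middle attacks.

# Minimal sketch of a public/private key exchange protecting a shared
# session key, which then hides the bulk data (an assumption, not the
# chapter's own example).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.fernet import Fernet

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1. The receiver generates a key pair and publishes the public key.
receiver_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_public = receiver_private.public_key()

# 2. The sender creates a random session key and encrypts it with the
#    receiver's public key; only the receiver's private key can recover it.
session_key = Fernet.generate_key()
wrapped_key = receiver_public.encrypt(session_key, oaep)

# 3. The sender hides the data with the symmetric session key.
ciphertext = Fernet(session_key).encrypt(b"confidential VPN payload")

# 4. The receiver unwraps the session key and decrypts the data.
recovered_key = receiver_private.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"confidential VPN payload"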


Exhibit 3. Primary Types of Network Connectivity

IP networks provide the highest degree of connectivity. IP is a connectionless protocol that routes from site to site and establishes connections via TCP very quickly. With IP, one can create one-to-one, one-to-many, or many-to-many connections very quickly. This makes it ideal for Internet and retail commerce applications where businesses are connecting to millions of customers — each hopping from Web site to Web site.

ATM switched virtual circuits (SVCs) offer a lower degree of connectivity than IP network services. ATM is a connection-oriented protocol and requires that a signaling protocol be completed prior to connection establishment. This usually takes milliseconds, but is substantially slower than connections in an IP cloud (not running Resource Reservation Protocol [RSVP] or Differentiated Services [Diff-Serv]). Additionally, ATM requires some rather complex configurations of protocols such as LANE, PNNI, or MPOA. ATM SVCs support one-to-one and one-to-many connections, but many-to-many connections are difficult and require multiple SVCs. This makes ATM SVCs ideal for intranet applications such as large file transfers, video catalogs, and digital film distribution, where connections last minutes instead of seconds and tend to be one-to-one or one-to-many.

In the case of Frame Relay and ATM permanent virtual circuits (PVCs), connectivity is low. While one-to-one and one-to-many connections are supported, the network service provider must establish channels. This can take anywhere from a few minutes to a few days, depending on the service provider. For this reason, these services are permanent in nature and utilized to connect sites that have round-the-clock or daily traffic.


Exhibit 4. Shared Circuits Use Bandwidth More Efficiently

Cost Savings through Sharing

Just like your kindergarten teacher told you, sharing is a good thing. The “virtual” in VPN is due to the fact that, in VPNs, bandwidth is shared (see Exhibit 4). Unlike dedicated circuits, in which one owns a certain bandwidth across the network whether one happens to be using it or not, VPNs allow one to share bandwidth with peers on the network. This allows the service provider to better engineer his network so that bandwidth is utilized more efficiently. Service providers can then pass substantial cost savings to their customers. Estimates for cost savings on switching from a dedicated network to a shared-bandwidth network can be as much as 20 to 40 percent. The key to sharing bandwidth is that it be fair, and this is why QoS is essential.

QUALITY-OF-SERVICE (QoS)

A survey by Infonetics listed QoS as the second leading concern of IT managers, behind security, in its importance in their network design decisions. QoS is the ability of a network to differentiate between different types of traffic and prioritize accordingly. It is the cornerstone of any convergence strategy. Voice, video, and data display very different traffic patterns in the network.


Exhibit 5. Required Bandwidth is the Sum of All Components

Voice and video are very delay dependent and have very predictable patterns, while data is very bursty and is less delay sensitive. If all three types of traffic are put on a network, the data traffic will usually interfere with voice and video and cause them to be unintelligible. A good example of what convergence is like without QoS would be to use an Internet phone during peak traffic hours. You will not like it.

One thing that QoS does not do is guarantee that data traffic will get from node A to node B under congestion conditions. It does not prioritize one company's traffic over another company's traffic if they are the same type. It merely prioritizes delay-sensitive traffic over traffic that is not delay sensitive. The only way a service provider can guarantee traffic will get from node A to node B is by designing the network's bandwidth capacity so that it can handle worst-case congestion conditions (see Exhibit 5). Be aware that many carriers that offer QoS overbook their networks.

ATM QoS

ATM networks have the advantage of being designed from the ground up for QoS. ATM networks offer Constant Bit Rate, Variable Bit Rate, Variable Bit Rate Real-time, Available Bit Rate, and Unspecified Bit Rate as Classes of Service within their QoS schemata. Additionally, the ATM Security Forum has defined security as a Class of Service. Access control, encryption, data integrity, and key exchange are all defined and integrated with QoS into a nice, interoperable package.

IP QoS

An acceptable standard for QoS within an IP-based network is still very much a work in progress. The Internet Engineering Task Force (IETF) has three standards: RSVP, Diff-Serv, and Multi-Protocol Label Switching (MPLS).

RSVP

RSVP was the first attempt at a universal, full-feature standard for IP QoS. However, based on its inability to scale, it failed to gain acceptance within the community. RSVP requires that all routers within the network maintain state information for all application flows routed through them. At the core of a large service provider network, this is impossible. It has found a small place in enterprise networks and in PVC-like applications. In these applications, flows are not set up by the end user, but by the network management system. Flows consist of aggregated traffic rather than a specific host-to-host application.

Differentiated Services (Diff-Serv)

Diff-Serv is the IETF's attempt at a solution that scales. It is more modest in its QoS offerings than RSVP. Diff-Serv groups individual traffic flows into aggregates. Aggregates are serviced as a single flow, eliminating the need for per-flow state and per-flow signaling. Within the Layer 3 IP header is a field designated for the Diff-Serv Code Point. Routers and IP switches use this mark to identify the aggregate flow to which a packet belongs. Diff-Serv does not supply per-application QoS. Instead, it is utilized to guarantee that the service level agreement (SLA) between a customer and a service provider is kept. By setting the Diff-Serv Code Point to a specific type-of-service (ToS), the customer designates the class of service required. Assuming that the network is engineered so that the sum of all SLAs can be met, the customer's data traffic can be guaranteed to arrive at its destination within the delay, throughput, and jitter tolerances specified in the SLA.

MPLS

MPLS is the up-and-coming favorite in the search to provide a standard that includes QoS and security for IP-based networks. MPLS has the advantage that it operates across, and binds in, a number of Layer 2 protocols, including Frame Relay, ATM, PPP, and IEEE 802.3 LANs. MPLS is a method for IP routing based on labels. The labels are used to represent hop-by-hop or explicit routes and to indicate QoS, security, and other information that affects how a given type of traffic is prioritized and carried across a network. MPLS combines the routing and any-to-any connection capabilities within IP and integrates them with Layer 2 protocols.

For example, within ATM, IP flows are labeled and mapped to specific VPI/VCIs. The IP QoS types are mapped to ATM QoS types, creating a QoS environment where Layer 2 and Layer 3 are cohesive. Enabling any-to-any connections within ATM and Frame Relay allows service providers to connect the many nodes of a single customer's network by specifying a single connection rather than the one-to-many connections that are normally required. MPLS is somewhat new, and a number of issues will arise — its interoperability with IPSec, for example. It is a work in progress. Thus, depending on one's immediate needs and how much blood one is willing to shed for the cause, one may want to wait until the dust settles before implementing an MPLS network.

EVALUATION METHODOLOGY

In evaluating any VPN networking solution or carrier service offering, testing is crucial. I cannot emphasize enough the importance of testing. The progress of the technological society can be traced back to the philosophies of Rationalism and Empiricism. Reliance, not on authorities in the field but on empirical evidence as revealed by sound evaluation methodology, is key to designing a networking solution that will fit one's needs and grow as one's requirements grow.

Research the Technology

One cannot evaluate a technology without some understanding of its underpinnings. Ignore the hype. Hype's sole purpose is to try to sell you something. Do not waste time on chapters discussing “technology wars.” This is the media's way of trying to make a rather dry subject interesting. The decision as to what is needed is going to be based on the quality of the information that has been gathered. Read books, take classes, and get some hands-on experience with the different VPN technologies. Decide for yourself which technology is the “winner.” Seek out your peers. Have they already been through an evaluation process? What were their evaluation criteria? What applications are they planning on running? Remember, the Internet started out as a place for “techies” to share information and help each other solve problems. The time spent researching the technology is worth the cost.

Define the Criteria

Application Traffic. The first step in evaluating a VPN solution is to fully understand the traffic characteristics of the applications that are running or will be running on the network for at least the next three years. If planning to integrate voice and video, take the time to understand what these applications are going to do to the network's traffic. If not involved in planning the applications that the company will be utilizing, get involved. One cannot plan a network without understanding what will be running on it.

Far too many IT organizations are in a purely reactionary mode, in which resources are absorbed in fixing problems that could have been avoided by a little planning. Technology can solve a lot of problems, but it can create more if sufficient planning has not occurred.

Security. Defining a company's security requirements is purely a matter of discovering the acceptable risks. What value does the company place on its data? In a worst-case scenario, what damage could be done by releasing confidential information? Who are you protecting your information from? Try to use scenarios to educate company management on the cost and value of data security.

In the age of the Internet, it is a safe assumption that any traffic sent out over a public network is fair game. Even separate networks like ATM and Frame Relay are subject to attack from the Internet. Remember, security is only as strong as its weakest link. If a company with poor security has a connection to both the Internet and a Frame Relay or ATM public network, a network attacker can use that company's network to access its Frame Relay or ATM network.

Unless one is trying to cure insomnia, do not get bogged down in discussions of cryptography. In most cases, open standards such as Triple DES or IDEA will provide an acceptable level of security. If someone needs this year's model of supercomputer versus last year's model to decode your data, does it matter? Unless you are the Defense Department or the Federal Reserve, the answer is usually “no.” Generally, one finds that certain standards are used across the board. These include, but are not limited to, Triple DES, IDEA, RSA Key Exchange, Diffie-Hellman Key Exchange, and X.509 Certificates.

Cost. In estimating the cost savings, be very careful of some of the quoted estimates provided by VPN vendors. Be sure to differentiate between cost savings based on an Internet model versus cost savings based on an intranet model. Sure, one might save up to 80 percent if all data is sent across the Internet, but the Internet is years away from being ready for mission-critical data. So, if the information to be sent is mission critical, the Internet may not be an option.

Bandwidth. Defining a bandwidth requirement and allowing for growth is especially tricky in today's convergent environment. Video and voice traffic require much more consistent bandwidth than data traffic, and they tend to fill up the pipe for longer periods of time. Imagine a large file transfer that lasts anywhere from five minutes to an hour. If packets drop in a data environment, it is usually transparent to the user. In a voice and video environment, one dropped packet can be very apparent and is more likely to generate a complaint.

Be aware that current IPSec solutions max out at about 90 Mbps simplex, which is roughly 45 Mbps duplex. If one has OC-3 bandwidth requirements, then ATM may be the only choice. This is one area where it is very important to test.

QoS and Performance. Understand that QoS, as it relates to convergence, may not be the same as QoS as defined in one's service level agreement (SLA). The key is that the network has the capability to differentiate between different types of service. If voice, video, and data are put in the same pipe as defined by a single QoS, they will still stomp all over each other.
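During testing, it can help to generate traffic that is already marked for a particular class of service. The fragment below is an illustrative sketch, not something prescribed by the chapter: it uses the standard Python socket API to set the Diff-Serv Code Point (carried in the former ToS byte, as described in the IP QoS section above) on outbound UDP datagrams. The IP_TOS option is assumed to be available, which depends on the operating system, and the destination address is a documentation-only placeholder.

# Illustrative only: marking outbound IP packets with a Diff-Serv Code
# Point so that routers can differentiate the flow.
import socket

DSCP_EF = 46                 # "Expedited Forwarding", commonly used for voice
tos_byte = DSCP_EF << 2      # DSCP occupies the upper six bits of the old ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Datagrams sent on this socket now carry DSCP 46; whether the network
# honors the marking depends on the SLA with the service provider.
sock.sendto(b"20 ms of encoded voice", ("192.0.2.10", 5004))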

In evaluating and measuring QoS, remember that QoS is an additive effect. It may be necessary to simulate multiple switches or routers. Also, be sure to generate enough low-priority background traffic to tax the network. Important measurement criteria for QoS are delay, delay variation, dropped cells, and throughput.

Transparency. Networks and network security should be invisible to the end user. This is one of the most important evaluation criteria in any security scheme. The end user must never know that a new security product has been implemented in one's network. It is not the end user's job to secure the network — it is yours. Nothing good can come from adversely affecting an end user in the performance of his duties. An end user will usually do one of two things: scream to the high heavens and sit around with his hands in his pockets until the problem is repaired, or find some way of subverting the network's security — putting everyone at risk.

Ease of Management. Ease of management is where the hedonists separate themselves from the masochists. Networking security solutions should be simple to implement and control. If they are not, then two very bad things might happen. One might accidentally subvert one's own security, exposing the company's network to risk and oneself to possible unemployment. Or one could cause the security solution to become opaque to the user, which is already known to be a bad thing from the previous paragraph. Make sure there is a comprehensive checklist defined for evaluating a VPN's management system. Keep in mind that, depending on a person's experience with networking paradigms, two people could come up with very different answers in this category. One should probably go with the dumb lazy guy's choice, assuming he was not too lazy to evaluate the network management interface.

Objectivity. It is important to remain objective in evaluating one's requirements. This is an opportunity to be a true scientist. Try not to become emotionally attached to any one technology or vendor. It clouds one's judgment and makes one annoying to be around at parties.

The best solution to a specific problem is the important thing.

Testing. Testing is where vendor claims hit the anvil of reality. Take the time to have evaluation criteria and a test plan in place well before the evaluation. Share and discuss the plan with vendors ahead of time. There is nothing to be gained by keeping a test plan secret. Many vendors will offer criteria for consideration and will help with test equipment. Good communication is a two-way street and will be absolutely essential in any installation.
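Because the measurement criteria listed earlier are delay, delay variation, dropped packets (or cells), and throughput, a test plan also needs a repeatable way to reduce raw probe timestamps to those numbers. The helper below is a minimal, hypothetical sketch in Python; generating the probes, synchronizing the clocks at the two ends, and producing the low-priority background load mentioned above are assumed to happen elsewhere, and the sample figures are invented.

# Hypothetical reduction step for a QoS test: compute delay, jitter,
# loss, and throughput from timestamped probe packets.
from statistics import mean

def qos_report(samples, packet_bytes, interval_s):
    # samples: one (send_time, recv_time) pair per probe sent;
    # recv_time is None if the probe was lost.
    delays = [recv - sent for sent, recv in samples if recv is not None]
    lost = sum(1 for _, recv in samples if recv is None)
    jitter = mean(abs(a - b) for a, b in zip(delays, delays[1:])) if len(delays) > 1 else 0.0
    duration = len(samples) * interval_s
    return {
        "mean_delay_ms": 1000 * mean(delays),
        "jitter_ms": 1000 * jitter,              # mean delay variation
        "loss_pct": 100 * lost / len(samples),
        "throughput_kbps": 8 * packet_bytes * (len(samples) - lost) / duration / 1000,
    }

# 200-byte probes sent every 20 ms; three illustrative samples, one lost.
print(qos_report([(0.000, 0.041), (0.020, 0.066), (0.040, None)],
                 packet_bytes=200, interval_s=0.020))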

Evaluations are opportunities not only to verify the vendor's claims, but also to test that vendor's customer support capability. Does equipment arrive at the stated time? How long does one have to wait on hold before obtaining technical support online? Is the vendor willing to send someone in person to support the install? How important are you to them? These are factors that are often ignored, but they are just as important as the technical criteria.

SUMMARY

VPNs and QoS networks are new technologies that developed in separate camps. In a number of instances, they may not be interoperable with each other, even in the same vendor's product. Careful definition of one's requirements and planning for an evaluation test period are absolute essentials in implementing a successful solution. Remember that, while vendors may not purposefully misrepresent their product, they do sometimes become confused about what they have now versus what they plan to have. Be careful to consider and prioritize requirements, and develop specific tests for those that are high priority. Be objective in studying what each of the technologies has to offer. The end result will be that people are able to communicate safely and more effectively; such a goal is worthy of some effort.

ADDITIONAL READING

John Vacca, Virtual Private Networks: Secure Access over the Internet, Data Communications Management, August 1999, No. 51-10-35.
Donna Kidder, Enable the Future with Quality of Service, Data Communications Management, June 2000, No. 53-10-42.



Chapter 22

Storage Area Networks Meet Enterprise Data Networks Lisa M. Lindgren

Until recently, people who manage enterprise data networks and people who manage enterprise storage have had little in common. Each has pursued a separate path, with technology and solutions that were unique to their particular environments. Enterprise network managers have been busy building a secure and switched infrastructure to meet the increasing bandwidth and access demands of corporate intranets and extranets. Storage management has been more closely related with particular applications like data backup and data mirroring. Enterprises have built standalone storage area networks (SANs) to manage the exponentially increasing volume of data that must be stored, retrieved, and safeguarded. With recent announcements, some enterprises will begin to merge storage-related networks with their data networks. This move, while making financial sense in some cases and providing tangible benefits, will create new challenges for the enterprise data network. This chapter provides a look at the rationale for SANs, the evolution of SANs, and the implications for the enterprise data network. A few definitions are in order. A storage area network (SAN) is a network that is built for the purpose of moving data to, from, or between storage devices, such as tape libraries and disk subsystems. A SAN is built of many of the elements common in data networks — namely, switches, routers, and gateways. The difference is that these are not the same devices that are implemented in data networks. The media and protocols are different, and the nature of the traffic is different as well. A SAN is built to efficiently move very large data blocks and to allow organizations to manage a vast 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE amount of SAN-attached data. By contrast, a data network must accommodate both large file transfers and small transactions such as HTTP requests and responses, and 3270/5250-style transactions. A related term that one encounters when dealing with storage is network-attached storage, or NAS. This is not just a mix-up of the SAN acronym. An NAS is a device, often called a filer or an appliance, that is a dedicated storage device. It is attached to a data LAN (or, in some cases, a SAN) and allows end users or servers to write data to its local storage. An NAS separates the storage of the data from the client’s system and the typical LAN-based application server. The NAS implements an embedded or standard OS, and must mimic at least one network operating system and support at least one workstation operating system (WOS). Many NAS systems claim support for multiple NOSs and multiple WOSs. One common use of an NAS is to provide data backup without involving the CPU of a generalpurpose application server. In summary, a SAN is a storage infrastructure designed to store and manage terabytes of data for the enterprise. An NAS is a low-end device designed to service a workgroup and store tens or hundreds of gigabytes. However, they share a common benefit to the enterprise. Both SANs and NAS devices separate the data from the file server. This important benefit is explored in more detail later. Exhibit 1 depicts a basic SAN and its elements in addition to the relationship between a SAN and a data network with NAS devices. RATIONALE FOR SANs SANs allow the decoupling of data storage and the application hosts that access and process the data. The concept of decoupling storage from the application host and sharing data storage devices between application hosts is not new. Mainframe-based data centers have been configured in this way for many years. The unique benefit of SANs, as compared to mainframe-oriented storage complexes, is that a SAN supports a heterogeneous mix of different application hosts. Theoretically, a SAN could be comprised of back-office systems based on Windows NT, Web servers based on Linux, ERP systems based on Sun Solaris, and customer service applications based on OS/390. All hosts could seamlessly access data from a pool of common storage devices, including NAS devices, JBOD (just a bunch of disks), RAID (redundant array of inexpensive disks), tape libraries, tape backup systems, and CD-ROM libraries (see Exhibit 1). Decoupling the application host from the data storage can provide dramatically improved overall availability. Access to particular data is not dependent on the health of a single application host. When there is a oneto-one relationship between host and data, the host must be active and 270


Exhibit 1. Conceptual Depiction of a Storage Area Network (SAN)

have sufficient available bandwidth to respond to a request for the data. SANs allow storage-to-storage connectivity so that certain procedures can take place without the involvement of an application host. For example, data mirroring, backup, and clustering can be easily implemented without impacting the mission-critical application hosts or the enterprise LAN or WAN. This enhances an organization’s overall high availability and disaster recovery abilities. SANs permit organizations to respond quickly to demands for increased storage. Without a SAN, the amount of storage available is proportionally related to the number of servers in the enterprise. This is a 271

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE critical benefit of SANs. Most organizations that have embarked upon E-commerce and E-business initiatives have discovered that their storage requirements are increasing almost exponentially. According to IBM, as organizations begin to perform business transactions via the Internet or extranet, they can expect to see information volume increase eightfold. SANs allow organizations to easily add new storage devices with minimal impact on the application hosts. SAN EVOLUTION AND TECHNOLOGY OVERVIEW Before SANs were around, the mainframe world and the client/server world had completely different storage media, protocols, and management systems. In the mainframe world, ESCON channels and ESCON directors provided a high-speed, switched infrastructure for data centers. An ESCON director is, in fact, a switch that allows mainframes and storage subsystems to be dynamically added and removed. ESCON operated initially at 10 MBps and eventually 17 MBps, which is significantly faster than its predecessor channel technology, Bus-and-Tag (4.5 MBps maximum). Networking of mainframe storage over a wide area network using proprietary protocols has also been available for many years, from vendors such as Network Systems Corporation and CNT. In the client/server world, Small Computer Systems Interface (SCSI) is an accepted and evolving standard. SCSI is a parallel bus that supports a variety of speeds, starting at 5 MBps for SCSI-1 and now supporting up to 320 MBps for the new Ultra320, although most devices installed operate at 20, 40, or 80 MBps. However, unlike the switched configurations possible with ESCON, SCSI is limited to a daisy-chaining configuration with a maximum of four, eight, or sixteen devices per chain, depending on which SCSI standard is implemented. There must be one “master” in the chain, which is typically the host server. It was the development and introduction of Fibre Channel technology that made SANs possible. Fibre Channel is the interconnect technology that allows organizations to build a shared or switched infrastructure for storage that parallels in many ways a data network. Fibre Channel: • Is a set of ANSI standards • Offers high speeds of 1 Gbps with a sustained throughput of 97 MBps (standard is scalable to up to 4 Gbps) • Supports point-to-point, arbitrated loop, and fabric (switched) configurations • Supports SCSI, IP, video, and raw data formats • Supports fiber and copper cabling • Supports distances up to 10 km 272
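To put these channel and bus speeds in rough perspective, the short Python sketch below compares how long a bulk transfer would take at several of the nominal rates quoted above. It is an illustrative back-of-the-envelope calculation only; the rates are nominal ratings rather than measured sustained throughput, and the 500-GB volume is a hypothetical example.

    # Illustrative only: bulk-transfer times at the nominal rates quoted in the text.
    NOMINAL_RATES_MBPS = {
        "Bus-and-Tag (4.5 MBps)": 4.5,
        "ESCON (17 MBps)": 17,
        "SCSI-1 (5 MBps)": 5,
        "Ultra SCSI (80 MBps)": 80,
        "Fibre Channel full speed (100 MBps)": 100,
    }

    def transfer_hours(gigabytes, rate_mbps):
        """Hours needed to move the given volume at a rate expressed in megabytes per second."""
        return gigabytes * 1024 / rate_mbps / 3600

    backup_gb = 500  # hypothetical nightly volume
    for name, rate in NOMINAL_RATES_MBPS.items():
        print(f"{name:38s} {transfer_hours(backup_gb, rate):7.1f} hours for {backup_gb} GB")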

Storage Area Networks Meet Enterprise Data Networks Fibre Channel is used primarily for storage connectivity today. However, the Fibre Channel Industry Association (www.fibrechannel.com) positions Fibre Channel as a viable networking alternative to Gigabit Ethernet and ATM. They cite CAD/CAE, imaging, and corporate backbones as good targets for Fibre Channel networking. In reality, it is unlikely that Fibre Channel will gain much of a toe-hold in the enterprise network because it would require a wholesale conversion of NICs, drivers, and applications — the very reason that ATM has lost out to Gigabit Ethernet in many environments. SANs within a campus are built using Fibre Channel hubs, switches, and gateways. The hubs, like data networking hubs, provide a shared bandwidth approach. Hubs link individual elements together to form an arbitrated loop. Disk systems integrate a loop into the backplane and then implement a port bypass circuit so that individual disks are not swappable. Fibre Channel switches are analogous to Ethernet switches. They offer dedicated bandwidth to each device that is directly attached to a single port in a point-to-point configuration. Like LAN switches, Fibre Channel switches are stackable so that the switch fabric is scalable to thousands of ports. Host systems (i.e., PC server, mainframes) support Fibre Channel host adapter slots or cards. Many hosts are configured with a LAN or WAN adapter as well for direct access to the data network (see Exhibit 2). Newer storage devices have direct Fibre Channel adapters. Older storage devices can be integrated into the Fibre Channel fabric by connecting to an SCSI-toFC gateway or bridge. FIBRE CHANNEL DETAILS Fibre Channel has been evolving since 1988. It is a complex set of standards that is defined in approximately 20 individual standards documents under the ANSI standards body. Although a thorough overview of all of the details of this complex and comprehensive set of standards is beyond the scope of this chapter, the basics of Fibre Channel layers, protocols, speeds and media, topologies, and port types are provided. Like other networking technologies, Fibre Channel provides some of the services defined by the Open Systems Interconnect (OSI) seven-layer reference model. The Fibre Channel standards define the physical layer up to approximately the transport layer of the OSI model, broken down into five different layers: FC-0, FC-1, FC-2, FC-3, and FC-4. Fibre Channel itself does not define a particular transport or upper layer protocol. Instead, it defines mappings from several popular and common upper layer protocols (e.g., SCSI, IP) to Fibre Channel. Exhibit 3 summarizes the functions of the five Fibre Channel layers. 273


Exhibit 2. Components of a Storage Area Network

Exhibit 3. Fibre Channel Layers

Layer   Functions
FC-0    Signaling, media specifications, receiver/transmitter specifications
FC-1    8B/10B character encoding, link maintenance
FC-2    Frame format, sequence management, exchange management, flow control, classes of service, login/logout, topologies, segmentation and reassembly
FC-3    Services for multiple ports on one node
FC-4    Upper Layer Protocol (ULP) mapping:
        • Small Computer System Interface (SCSI)
        • Internet Protocol (IP)
        • High Performance Parallel Interface (HIPPI)
        • Asynchronous Transfer Mode — Adaptation Layer 5 (ATM-AAL5)
        • Intelligent Peripheral Interface — 3 (IPI-3) (disk and tape)
        • Single Byte Command Code Sets (SBCCS)
        • Future ULPs

Source: University of New Hampshire InterOperability Lab.


Although its name may imply otherwise, the Fibre Channel standard supports transmission over both fiber and copper cabling for transmission up to the "full-speed" rate of 100 megabytes per second (MBps). Slower rates are supported, and products are currently available at half-, quarter-, and eighth-speeds representing speeds of 50, 25, and 12.5 MBps, respectively. Higher speeds of 200 and 400 MBps are also supported and implemented in today's products, but only fiber cabling is supported at these higher speeds.

The Fibre Channel standards support three different topologies: point-to-point, arbitrated loop, and fabric. A point-to-point topology is straightforward. A single cable connects two different end points, such as a server and a disk subsystem. The arbitrated loop topology is analogous to a shared media LAN such as Ethernet or Token Ring. Like a LAN, the devices on an arbitrated loop share the total bandwidth. This is a complex topology because issues like contention for the loop must be resolved, but it is the most common topology implemented today. The devices in an arbitrated loop can be connected from one to another in a ring-type topology, or a centralized hub can be implemented to allow for an easier and more flexible star-wired configuration. A single arbitrated loop can connect up to 127 devices, which is sufficient for many SAN implementations. The final topology is the fabric. This is completely analogous to a switched Fast Ethernet environment. The devices and hosts are directly attached, point-to-point, to a central switch. Each connection can utilize the full bandwidth of the connection. Switches can be networked together. The fabric can support up to 2²⁴ (approximately 16 million) devices. The fabric is the topology that offers the maximum scalability and availability. Obviously, it is also the most costly of the three topologies.

The Fibre Channel standards define a variety of different types of ports that are implemented in various products. Exhibit 4 provides a definition of the various types of ports.

ACCOMMODATING SAN TRAFFIC ON THE ENTERPRISE DATA NETWORK

Enterprises are widely implementing SANs to meet the growing demand for enterprise storage. The benefits are real and immediate. However, in some cases, the ten-kilometer limit of a SAN can be an impediment. For example, a disaster recovery scheme may require sending large amounts of data to a sister site located in another region of the country hundreds of miles away. For this and other applications, enterprises need to send SAN traffic over a WAN. This should not be done lightly, because WAN speeds are often an order of magnitude lower than campus speeds and the amount of data

Exhibit 4. Fibre Channel Port Types

Port Type   Definition
N_Port      Node port, implemented on an end node such as a disk subsystem, server, or PC
F_Port      Port of the fabric, such as on an FC switch
L_Port      Arbitrated loop port, such as on an FC hub
NL_Port     Node port that also supports arbitrated loop
FL_Port     Fabric port that also supports arbitrated loop
E_Port      Connects FC switches together
G_Port      A port that may act either as an F_Port or an E_Port
GL_Port     A G_Port that also supports arbitrated loop

can be enormous. However, there are very real and valid instances in which it is desirable or imperative to send storage traffic over a WAN, including:

• Remote tape backup for disaster recovery
• Remote disk mirroring for continuous business operations
• Use of a storage service provider for outsourced storage services

Enterprises have two basic choices in extending the SAN to the wide area. They can either build a stand-alone WAN that is used only for storage traffic, or they can integrate it with the existing data WAN. A stand-alone WAN can be built with proprietary protocols over high-speed links or it can utilize ATM. The obvious downfall of this approach is its high cost of ownership. If the links are not fully utilized for a large portion of the day and week, it may be difficult to justify a separate infrastructure and ongoing telecommunication costs. The advantage of this approach is that it dedicates bandwidth to storage management.

A shared network approach may be viable in certain instances. With this approach, the SAN traffic shares the WAN with the traditional enterprise data network. Various approaches exist to allow this to happen. As already detailed, the Fibre Channel standards define a mapping for IP-over-FC, so products that implement this mapping will work natively over any IP-based data WAN. Other approaches encapsulate proprietary storage-oriented protocols (e.g., EMC's proprietary remote data protocol, Symmetrix Remote Data Facility — SRDF) within TCP/IP so that the traffic is seamlessly transported on the WAN.

What does all this mean to networking vendors and enterprise network managers? First and foremost, it means that the data WAN, already besieged with requests for increased bandwidth to support new E-business applications, may need to deal with a potentially huge new type of traffic not previously anticipated. The key in making a shared storage/data network work will be the cooperative planning between the affected IT organizations. For example, can the storage traffic only use the network during periods of low

Storage Area Networks Meet Enterprise Data Networks transaction traffic? What is the amount of data, and what is the window in which the transfer of data must be completed? What bandwidth management, quality-of-service, and queuing tools are available to allow the two environments to coexist peacefully? These are the critical questions that the enterprise data manager must ask to begin the process of defining a solution that will minimize the impact on the regular data traffic. SUMMARY Storage area networks (SANs) are being implemented in enterprises of all sizes. The separation of the storage of data from the application or file server has numerous benefits. Fibre Channel, a set of standards defined over a period of years to support high speeds and ubiquitous connectivity, offers the enterprise a variety of different topologies. However, in some cases, the SAN must be extended over a wide area data network. When this happens, the impact to the data network can be severe if proper planning and tools are not put in place. The enterprise data manager must understand the type, quantity, duration, and timing of the storage traffic to effectively integrate the storage data with the enterprise data network while minimizing the impact on both operations.



Chapter 23

Data Warehousing Concepts and Strategies
Bijoy Bordoloi, Stefan M. Neikes, Sumit Sircar, and Susan E. Yager

Many IT organizations are increasingly adopting data warehousing as a way of improving their relationships with corporate users. Proponents of data warehousing technology claim the technology will contribute immensely to a company's strategic advantage. According to Gartner, U.S. companies spent $7 billion in 1999 on the creation and operation of data warehouses; the amount spent on these techniques has grown by 35 percent annually since 1996.1

Companies contemplating the implementation of a data warehouse need to address many issues concerning strategies, the type of data warehouse, front-end tools, and even corporate culture. Other issues that also need to be examined include who will maintain the data warehouse and how often, and most of all, which corporate users will have access to it. After defining the concept of data warehousing, this chapter provides an in-depth look at design and construction issues; types of data warehouses and their respective applications; data mining concepts, techniques, and tools; and managerial and organizational impacts of data warehousing.

HISTORY OF DATA WAREHOUSING

The concept of data warehousing is best presented as part of an evolution that began about 35 years ago. In the early 1960s, the arena of computing was limited by punch cards, files on magnetic tape, slow access times, and an immense amount of overhead. About the mid-1960s, the near-explosive


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE growth in the usage of magnetic tapes increased the amount of data redundancy. Suddenly, new problems, ranging from synchronizing data after updating to handling the complexity of maintaining old programs and developing new ones, had to be resolved. The 1970s saw the rise of direct-access storage devices and the concomitant technology of database management systems (DBMSs). DBMSs made it possible to reduce the redundancy of data by storing it in a single place for all processing. Only a few years later, databases were used in conjunction with online transaction processing (OLTP). This advancement enabled the implementation of such applications as automated teller machines and reservations systems used by travel and airline industries to store up-todate information. By the early 1980s, the introduction of the PC and fourthgeneration technology let end users innovatively and more effectively utilize data in the database to guide decision making. All these advances, however, engendered additional problems, such as producing consistent reports for corporate data. It was difficult and timeconsuming to accomplish the step from pure data to information that gives meaning to the organization and to overcome the lack of integration across applications. Poor or nonexistent historical data only added to the problems of transforming raw data into intelligent information. This dilemma led to the realization that organizations need two fundamentally different sets of data. On one hand, there is primitive or raw data, which is detailed, can be updated, and is used to run the day-to-day operations of a business. On the other hand, there is summarized or derived data, which is less frequently updated and is needed by management to make higher-level decisions. The origins of the data warehouse as a subject-oriented collection of data that supports managerial decision making are therefore not surprising. Many companies have finally realized that they cannot ignore the role of strategic information systems if they are to attain a strategic advantage in the marketplace. CEOs and CIOs throughout the United States and the world are steadily seeking new ways to increase the benefits that IT provides. Data is increasingly viewed as an asset with as much importance in many cases as financial assets. New methods and technologies are being developed to improve the use of corporate data and provide for faster analyses of business information. Operational systems are not able to meet decision support needs for several reasons: • Most organizations lack online historical data. • The data required for analysis often resides on different platforms and operational systems, which complicates the issue further. 280

• The query performance of many operational systems is extremely poor, which in turn affects overall performance.
• Operational database designs are inappropriate for decision support.

For these reasons, the concept of data warehousing, which has been around for as long as databases have existed, has suddenly come to the forefront. A data warehouse eliminates the decision support shortfalls of operational systems in a single, consolidated system. Data is thus made readily accessible to the people who need it, especially organizational decision makers, without interrupting online operational workloads. The key value of a data warehouse is that it provides a single, more quickly accessible, and more accurately consolidated image of business reality. It lets organizational decision makers monitor and compare current and past operations, rationally forecast future operations, and devise new business processes. These benefits are driving the popularity of data warehousing and have led some advocates to call the data warehouse the center of IS architecture in the years ahead.

THE BASICS OF DATA WAREHOUSING TECHNOLOGY

According to Bill Inmon, author of Building the Data Warehouse,2 a data warehouse has four distinguishing characteristics:

1. Subject orientation
2. Integration
3. Time variance
4. Nonvolatility

As depicted in Exhibit 1, the subject-oriented database characteristic of the data warehouse organizes data according to subject, unlike the application-based database. The alignment around subject areas affects the design and implementation of the data found in the data warehouse. For this reason, the major subject areas influence the most important part of the key structure. Data warehouse data entries also differ from application-oriented data in their relationships. Although operational data has relationships among tables based on the business rules that are in effect, the data warehouse encompasses a spectrum of time.

A data warehouse is also integrated in that data is moved there from many different applications (see Exhibit 2). This integration is noticeable in several ways, such as the implementation of consistent naming conventions, consistent measurement of variables, consistent encoding structures, and consistent physical attributes of data. In comparison, operational data is often inconsistent across applications. The preprocessing of information aids in reducing access time at the point of inquiry.
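As a concrete (and entirely hypothetical) illustration of this integration step, the sketch below shows how inconsistent gender encodings arriving from three source applications might be mapped to one consistent warehouse encoding during the load; the field names and codes are invented for the example.

    # Hypothetical example of enforcing a consistent encoding structure during the load.
    CANONICAL_GENDER = {
        "m": "M", "male": "M", "1": "M",
        "f": "F", "female": "F", "0": "F",
    }

    def integrate_record(source_record):
        """Return a copy of a source record re-encoded to the warehouse convention."""
        cleaned = dict(source_record)
        raw = str(source_record.get("gender", "")).strip().lower()
        cleaned["gender"] = CANONICAL_GENDER.get(raw, "U")  # 'U' marks an unknown value
        return cleaned

    for rec in ({"cust_id": 101, "gender": "m"},
                {"cust_id": 102, "gender": "FEMALE"},
                {"cust_id": 103, "gender": 1}):
        print(integrate_record(rec))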


Exhibit 1. The Data Warehouse Is Subject Oriented

Exhibit 3 shows the time-variant feature of the data warehouse. The data stored is up to five to ten years old and is used for making consistent comparisons, viewing trends, and providing a forecasting tool. Operational environment data reflects only accurate values as of the moment of access. The data in such a system may change at a later point in time through updates or inserts. On the contrary, data in the data warehouse is accurate as of some moment in time and will produce the same results every time for the same query. The time-variant feature of the data warehouse is observed in different ways. In addition to the lengthier time horizon as compared to the operational environment, time variance is also apparent in the key structure of a data warehouse. Every key structure contains — implicitly or explicitly — 282


Exhibit 2. Integration of Data in the Data Warehouse

Exhibit 3. The Data Warehouse Is Time Variant



Exhibit 4. The Data Warehouse Is Nonvolatile

an element of time, such as day, week, or month. Time variance is also evidenced by the fact that the data warehouse is never updated. Operational data is updated as the need arises.

The nonvolatility of the warehouse means that there is no inserting, deleting, replacing, or changing of data on a record-by-record basis, as is the case in the operational environment (see Exhibit 4). This difference has tremendous consequences. At the design level, for example, there is no need to be cautious about update anomalies. It follows that normalization of the physical database design loses its importance because the design focuses on optimized access of data. Other issues that simplify data warehouse design involve the nonpresence of transaction and data integrity as well as detection and remedy of deadlocks, which are found in every operational database environment.

Effective and efficient use of the data warehouse necessitates that the data warehouse run on a separate platform. If it does not, it will slow down the operations database and reduce response time by a large factor.

DESIGN AND CONSTRUCTION OF A DATA WAREHOUSE

Preliminary Considerations

Like any other undertaking, a data warehouse project should demonstrate success early and often to upper management. This ensures high visibility and justification of the immense commitment of resources and costs associated with the project.

Before undertaking the design of the data warehouse, however, it is wise to remember that a data warehouse project is not as easy as copying data from one database to another and handing it over to users, who then simply extract the data with PC-based queries and reporting tools. Developers should not underestimate the many complex issues involved in data warehousing. These include architectural considerations, security, data integrity, and network issues. According to one estimate, about 80 percent of the time that is spent constructing a data warehouse is devoted to extracting, cleaning, and loading data. In addition, problems that may have gone undetected for years can surface during the design phase. The discovery of data that has never been captured as well as data that has been altered and stored are examples of these types of problems. A solid understanding of the business and all the processes that have to be modeled is also extremely important.

Another major consideration important to up-front planning is the difference between the data warehouse and most other client/server applications. First, there is the issue of batch orientation for much of the processing. The complexity of processes (which may be executed on multiple platforms), data volumes, and resulting data synchronization issues must be correctly analyzed and resolved. Next, the data volume in a data warehouse, which can be in the terabyte range, must be considered. New purchases of large amounts of disk storage space and magnetic tape for backup should be expected. It is also vital to plan and provide for the transport of large amounts of data over the network. The ability of data warehousing to support a wide range of queries — from simple ones that return only limited amounts of information to complex ones that might access several million rows — can cause complications. It is also necessary to incorporate the availability of corporate metadata into this thought process. The designers of the data warehouse have to remember that metadata is likely to be replicated at multiple sites. This points to the need for synchronization across the different platforms to avoid inconsistencies. Finally, security must be considered. In terms of location and security, data warehouse and non-data warehouse applications must appear seamless. Users should not need different IDs to sign on to different systems, but the application should be smart enough to provide users the correct access with only one password.

Designing the Warehouse

After having addressed all the preliminary issues, the design task begins. There are two approaches to designing a data warehouse: the top-down

approach and the bottom-up approach. In the top-down approach, all of an organization's business processes are analyzed to build an enterprisewide data warehouse in one step. This approach requires an immense commitment of planning, resources, and time, and it results in a new information structure from which the entire organization benefits. The bottom-up approach, on the other hand, breaks down the task and delivers only a small subset of the data warehouse. New pieces are then phased in until the entire organization is modeled. The bottom-up approach allows data warehouse technology to be quickly delivered to a part of the organization. This approach is recommended because its time demands are not as rigorous. It also allows development team members to learn as they implement the system, identify bottlenecks and shortfalls, and find out how to avoid them as additional parts of the data warehouse are delivered.

Because a data warehouse is subject oriented, the first design step involves choosing a business subject area to be modeled and eliciting information about the following:

• The business process that needs to be modeled
• The facts that need to be extracted from the operational database
• The level of detail required
• Characteristics of the facts (e.g., dimension, attribute, and cardinality)

After each of these areas has been thoroughly investigated and more information about facts, dimension, attributes, and sparsity has been gathered, yet another decision must be made. The question now becomes which schema to use for the design of the data warehouse database. There are two major options: the classic star schema and the snowflake schema. The Star Schema. In the star design schema, a separate table is used for each dimension, and a single large table is used for the facts (see Exhibit 5). The fact table’s indexed key contains the keys of the different dimension tables.
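A minimal, self-contained sketch of such a star schema is shown below, using SQLite through Python; the table and column names are invented for illustration and are not taken from the chapter or Exhibit 5.

    # Minimal star schema sketch: one fact table whose key is composed of
    # dimension keys, plus a simple roll-up query. Names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()

    cur.executescript("""
    CREATE TABLE dim_time    (time_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT, category TEXT);
    CREATE TABLE fact_sales (
        time_key    INTEGER REFERENCES dim_time(time_key),
        product_key INTEGER REFERENCES dim_product(product_key),
        units_sold  INTEGER,
        revenue     REAL,
        PRIMARY KEY (time_key, product_key)   -- fact key built from the dimension keys
    );
    """)

    cur.executemany("INSERT INTO dim_time VALUES (?, ?, ?)",
                    [(1, 2002, 11), (2, 2002, 12)])
    cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                    [(10, "Widget", "Hardware"), (20, "Manual", "Documentation")])
    cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                    [(1, 10, 120, 2400.0), (1, 20, 40, 400.0), (2, 10, 90, 1800.0)])

    # Roll up revenue by category and month by joining the facts to their dimensions.
    for row in cur.execute("""
        SELECT p.category, t.year, t.month, SUM(f.revenue)
        FROM fact_sales f
        JOIN dim_product p ON p.product_key = f.product_key
        JOIN dim_time t    ON t.time_key    = f.time_key
        GROUP BY p.category, t.year, t.month
        ORDER BY p.category, t.year, t.month"""):
        print(row)

    conn.close()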

With this schema, the problem of sparsity, or the creation of empty rows, is avoided by not creating records where combinations are invalid. Users are able to follow paths for detailed drilldowns and summary rollups. Because the dimension tables are also relatively small, precalculated aggregation can be embedded within the fact table, providing extremely fast response times. It is also possible to apply multiple hierarchies against the same fact table, which leads to the development of a flexible and useful set of data. The Snowflake Schema. The snowflake schema depicted in Exhibit 6 is best used when there are large dimensions, such as time. The dimension 286


Exhibit 5. The Star Design Schema

tables are split at the attribute level to provide a greater variety of combinations. The breakup of the time dimension into a quarter entity and a month entity provides more detailed aggregation and also more exact information. DECISION SUPPORT SYSTEMS AND DATA WAREHOUSING Because many vendors offer decision support system (DSS) products and information on how to implement them abounds, insight into the different technologies available is helpful. Four concepts should be evaluated in terms of their usability for decision support and relationship to the socalled real data warehouse: (1) virtual data warehouses, (2) multidimensional online analytical processing (OLAP), (3) relational OLAP, and (4) Web-based data warehouses. The Virtual Data Warehouse The virtual data warehouse promises to deliver the same benefits as a real data warehouse, but without the associated amount of work and difficulty. The virtual data warehouse concept can be subdivided into the surround 287


Exhibit 6. The Snowflake Design Schema

data warehouse and the OLAP/data mart warehouse. In a surround data warehouse, legacy systems are merely surrounded with methods to access data without a fundamental change in the operational data. The surround concept thus negates a key feature of the real data warehouse, which integrates operational data in a way that allows users to make sense of it. In addition, the data structure of a virtual data warehouse does not lend itself to DSS (decision support system) processing. Legacy operational systems were built to ease updating, writing, and deleting, and not with simple data extraction in mind. Another deficiency with this technology is the minimal amount of historical data that is kept, usually only 60 to 90 days worth of information. A real data warehouse, on the other hand, with its


Data Warehousing Concepts and Strategies five to ten years worth of information, provides a far superior means of analyzing trends. In the case of direct OLAP/data marts, legacy data is transferred directly to the OLAP/data mart environment. Although this approach recognizes the need to remove data from the operational environment, it too falls short of being a real data warehouse. If only a few small applications were feeding a data mart, the approach would be acceptable. The reality is, however, that there are many applications and thus many OLAP/data mart environments, each requiring a customized interface, especially as the number of OLAP/data marts increases. Because the different OLAP/data marts are not effectively integrated, different users may arrive at different conclusions when analyzing the data. As a result, it is possible for the marketing department to report that the business is doing fine and another department to report just the opposite. This drawback does not exist with the real data warehouse, where all data is integrated. Users who examine the data at a certain point in time would all reach the same conclusions. Multidimensional OLAP Multidimensional database technology is a definite step up from the virtual data warehouse. It is designed for executives and analysts who want to look at data from different perspectives and have the ability to examine summarized and detailed data. When implemented together with a data warehouse, multidimensional database technology provides more efficient and faster access to corporate data. Proprietary multidimensional databases facilitate the hierarchical organization of data in multiple dimensions, allowing users to make advanced analyses of small portions of data from the data warehouse. The technology is understandably embraced by many in the industry because of its increased usability and superior analytical functionality. As a stand-alone technology, multidimensional OLAP is inferior to a real data warehouse for a variety of reasons. The main drawback is that the technology is not able to handle more than 20 to 30 gigabytes of data, which is unacceptable for most of the larger corporations with needs ranging from 100 gigabytes to several terabytes. Furthermore, multidimensional databases do not have the flexibility and measurability required of today’s decision support systems because they do not support the necessary ad hoc creation of multidimensional views of products and customers. Multidimensional databases should be considered for use in smaller organizations or at a department level only. 289
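The flavor of this kind of multidimensional analysis can be suggested with a few lines of plain Python (a toy illustration only, not a multidimensional database): the same detail rows are summarized along whichever dimensions the analyst selects.

    # Toy illustration of viewing the same detail data along different dimensions.
    from collections import defaultdict

    # (region, product, month, sales) detail rows -- hypothetical data
    detail = [
        ("East", "Widget", "Jan", 100),
        ("East", "Gadget", "Jan", 80),
        ("West", "Widget", "Jan", 120),
        ("West", "Widget", "Feb", 90),
    ]

    def rollup(rows, *dims):
        """Summarize sales along the requested dimensions (0=region, 1=product, 2=month)."""
        totals = defaultdict(int)
        for row in rows:
            key = tuple(row[d] for d in dims)
            totals[key] += row[3]
        return dict(totals)

    print(rollup(detail, 0))        # by region
    print(rollup(detail, 1, 2))     # by product and month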

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE Relational OLAP Relational OLAP is also used with many decision support systems and provides sophisticated analytical capability in conjunction with a data warehouse. Unlike multidimensional database technology, relational OLAP lets end users define complex multidimensional views and analyze them. These advantages are only possible if certain functionalities are incorporated into relational OLAP. Users must be removed from the process of generating their own Structured Query Language (SQL). Multiple SQL statements should be generated by the system for every analysis request to the data warehouse. In this way, a set of business measurements (e.g., comparison and ranking measurements) is established, which is essential to the appropriate use of the technology. Although relational OLAP technology works well in conjunction with a data warehouse, the technology by itself is somewhat limited. Examination of the three preceding decision support technologies leads to the only correct deduction — that data warehouse technology is still best suited to larger firms. The benefit of having integrated, cleansed data from legacy systems, together with historical information about the business, makes a properly implemented data warehouse the primary choice for decision support. Web-Based Data Warehouses The Internet and World Wide Web have already exhibited their influence on all aspects of today’s business environment, and they are exerting a significant impact on data warehousing as well.3 Web-based data warehouses (often called data Webhouses) are Web instantiations of data warehouses.4 The basic purpose of the data Webhouse is to provide information to internal management, employees, customers, and business partners, and to promote the exchange of experiences and ideas for problem resolution and effective decision making. Use of a Web-based architecture provides ease of access, platform independence, and lower cost than traditional data warehouses. A data Webhouse can be considered a marriage of data warehousing and the Web, resulting in a more robust interface capable of presenting information to the users in a desirable format. Any user with a Web browser can access stored information, including teleworkers, sales representatives at customer sites, customers, suppliers, and business partners. In addition, the number of mobile business users is growing rapidly.5 Although data warehouses are becoming increasingly popular as tools for E-business, customer relation management (CRM), and supply-chain management (SCM), the required distribution of a data Webhouse con290

Data Warehousing Concepts and Strategies trasts with the centralized, multidimensional, traditional data warehouse. In a data Webhousing environment, data is gathered from different nodes spread across the network. The architecture is typically three-tiered, including client, Web server, and application server; and the Internet/intranet/extranet is the communication medium between the client and servers. Three of the biggest challenges to this architecture involve scalability, speed, and security. Although the Internet/intranet/extranet provides ease of access to interested and involved parties, it can be extremely difficult to estimate the number of users who will be accessing the data warehouse concurrently. Unmanaged congestion over the network can lead to slower transmission, lower performance, increased server problems, and lower user satisfaction. Any transmission over a network exposes both the information, and the network itself, to security risks. One possible solution to these risks is to make decision-making data and tools available only to users by a more secured channel, such as an intranet. THE BENEFITS OF WAREHOUSING FOR DATA MINING The technology of data mining is closely related to that of data warehousing. It involves the process of extracting large amounts of previously unknown data and then using the data to make important business decisions. The key phrase here is “unknown information,” buried in the huge mounds of operational data that, if analyzed, provides relevant information to organizational decision makers. Significant data is sometimes undetected because most data is captured and maintained by a particular department. What may seem irrelevant or uninteresting at the department level may yield insights and indicate patterns important at the organizational level. These patterns include market trends, such as customer buying patterns. They aid in such areas as determining the effectiveness of sales promotions, detecting fraud, evaluating risk and assessing quality, or analyzing insurance claims. The possibilities are limitless and yield a variety of benefits ultimately leading to improved customer service and business performance. Data provides no real value to business users if it is located on several different systems, in different formats and structures, and redundantly stored. This is where the data warehouse comes into play as a source of consolidated and cleansed data that facilitates analysis better than do regular flat files or operational databases. Three steps are needed to identify and use hidden information: 1. The captured data must be incorporated into a view of the entire organization, instead of only one department. 2. The data must be analyzed, or mined, for valuable information. 291

3. The information must be specially organized to simplify decision making.

Data Mining Tasks

In data mining, data warehouses, query generators, and data interpretation systems are combined with discovery-driven systems to provide the ability to automatically reveal important yet hidden data. The following tasks need to be completed to make full use of data mining:

1. Creating prediction and classification models
2. Analyzing links
3. Segmenting databases
4. Detecting deviations

Creating Models. The first task makes use of the data warehouse’s contents to automatically generate a model that predicts desired behavior. In comparison to traditional models that use statistical techniques and linear and logistic regression, discovery-driven models generate accurate models that are also more comprehensible because of their sets of if-then rules. The performance of a particular stock, for example, can be predicted to assess its suitability for an investment portfolio. Analyzing Links. The goal of the link analysis is to establish relevant connections between database records. An example here is the analysis of items that are usually purchased together, like a washer and dryer. Such analysis can lead to a more effective pricing and selling strategy. Segmenting Databases. When segmenting databases, collections of records with common characteristics or behaviors are identified. One example is the analysis of sales for a certain time period (such as President’s Day or Thanksgiving weekend) to detect patterns in customer purchase behavior. For the reasons discussed previously, this is an ideal task for a data warehouse. Detecting Deviations. The fourth and final task involves detection of deviation, which is the opposite of data segmentation. Here, the goal is to identify records that vary from the norm, or lie outside of any particular cluster with similar characteristics. This discovery from the cluster is then explained as normal or as a hint of a previously unknown behavior or attribute.

Web-Based Data Mining Data mining in the Web environment has been termed “Web mining,” the application of data mining techniques to Web resources and activities.8 Three categories of Web mining include: (1) content mining, (2) structure 292

Data Warehousing Concepts and Strategies mining, and (3) usage mining. Web content mining is an automatic process for extracting online information. Web structure mining uses the analysis of link structures on the Web to identify more preferable documents. Web usage mining records and accumulates information about user interactions with Web sites. This type of information has proven invaluable to firms by allowing the tailoring of content and offerings to best serve customers and maximize potential sales. Data Mining Techniques At this point, it is important to present several techniques that aid mining efforts. These techniques include the creation of predictive models, and performing supervised induction, association, and sequence discovery. Creating Predictive Models. The creation of a so-called predictive model is facilitated through numerous statistical techniques and various forms of visualization that ease the user’s recognition of patterns. Supervised Induction. With supervised induction, classification models are created from a set of records, which is referred to as the training set. This method makes it possible to infer from a set of descriptors of the training set to the general. In this way, a rule might be produced that states that a customer who is male, lives in a certain zip code area, earns $25,000 to $30,000 per year, is between 40 and 45 years of age, and listens more to the radio than watches TV might be a possible buyer for a new camcorder. The advantage of this technique is that the patterns are based on local phenomena, whereas statistical measures check for conditions that are valid for an entire population. Association Discovery. Association discovery allows for the prediction of the occurrence of some items in a set of records if other items are also present. For example, it is possible to identify the relationship among different medical procedures by analyzing claim forms submitted to an insurance company. With this information the prediction could be made, within a certain margin of error, that the same five medicines are usually required for treatment. Sequence Discovery. Sequence discovery aids the data miner by providing information on a customer’s behavior over time. If a certain person buys a VCR this week, he or she usually buys videotapes on the next purchasing occasion. The detection of such a pattern is especially important to catalog companies because it helps them better target their potential customer base with specialized advertising catalogs. 293

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

Exhibit 7. Neural Network

Data Mining Tools The main tools used in data mining are neural networks, decision trees, rule induction, and data visualization. Neural Networks. A neural network consists of three interconnected layers: an input, an output layer, and a hidden layer in between (see Exhibit 7). The hidden processing layer is like the brain of the neural network because it stores or learns rules about input patterns and then produces a known set of outputs. Because the process of neural networks is not transparent, it leaves the user without a clear interpretation of the resulting model, which, nevertheless, is applied. Decision Trees. Decision trees divide data into groups based on the values that different variables take on (see Exhibit 8). The result is often a complex hierarchy of classifying data, which enables the user to deduct possible future behavior. For example, it might be deducted that for a person who only uses a credit card occasionally, there is a 20 percent probability that an offer for another credit card would be accepted. Although decision trees are faster than neural networks in many cases, they do have drawbacks. One of these is the handling of data ranges, such as age groups, which can inadvertently hide patterns.

294

Data Warehousing Concepts and Strategies

Exhibit 8. Decision Tree

Rule Induction. The method of rule induction is applied by creating nonhierarchical sets of possibly overlapping conditions. This is accomplished by first generating partial decision trees. Statistical techniques are then used to determine which decision trees to apply to the input data. This method is especially useful in cases where there are long and complex condition lists. Data Visualization. Data visualization is not really a data mining tool. However, because it provides a picture for the user with as many as four graphically represented variables, it is a powerful tool for providing concise information. The graphics products available make the detection of patterns much easier than is the case when more numbers are analyzed.

Because of the pros and cons of the various data mining tools, software vendors today incorporate all or some of them in their data mining packages. Each tool is essentially a matter of looking at data by different means and from different angles. One of the potential problems in data mining is related to performance. To get faster processing, it might be necessary to subset the data, either by the number of rows accessed or by the number of variables examined. This can lead to slightly different conclusions about a data set. Consequently, in most cases it is better to wait for the correct answer using a large sample.

295

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE MANAGERIAL AND ORGANIZATIONAL IMPACTS OF DATA WAREHOUSING Although organizational managers eagerly await the completion of a data warehouse, many issues must be dealt with before the fruits of this new technology are harvested. This is especially true in today’s fast changing enterprise with its quick reaction time. The subject of economic benefit also deserves mention when dealing with data warehousing because some projects have already acquired the reputation of providing little or no payback on the huge technology investments involved. Data warehouses are sometimes accused of being pits into which data disappear, never to be seen again. Managers must understand at the outset that the quality of the data is of extreme importance in a data warehousing project. The sometimes-difficult challenge for management is to make data entering the data warehouse consistent. In some organizations, data is stored in flat, VSAM, IMS, IDMS, or SA files and a variety of relational databases. In addition, different systems, designed for different functions, contain the same terms but with different meanings. If care is not taken to clean up this terminology during data warehouse construction, misleading management information results. The logical consequence of this requirement is that management has to agree on the data definition for elements in the warehouse. This is yet another challenging task. People who use the data in the short term and the long term must have input into the process and know what the data means. The manager in charge of loading the data warehouse has four ways to handle erroneous data: 1. If the data is inaccurate, it must be completely rejected and corrected in the source system. 2. Data can also be accepted as is, if it is within a certain tolerance level and if it is marked as such. Capture and correct the data before it enters the warehouse. Capture and correction are handled programmatically in the process of transforming data from one system to the data warehouse. An example might be a field that was in lowercase and needs to be stored in uppercase. 3. Replace erroneous data with a default value. If, for example, the date February 29 of a non-leap year is defaulted to February 28, there is no loss in data integrity. 4. Data warehousing affects management, and organizations in general, in today’s business motto of “working smarter, not harder.” Today’s data warehouse users can become more productive because 296

Data Warehousing Concepts and Strategies they will have the tools to analyze the huge amounts of data that they store, rather than just collect it. Organizations are also affected by the invalid notion that implementing data warehousing technology simply consists of integrating all pertinent existing company data into one place. Managers need to be aware that data warehousing implies changes in the job duties of many people. For example, in an organization implementing a data warehouse, data analysis and modeling become much more prevalent than just requirements analysis. The database administrator position does not merely involve the critical aspects of efficiently storing data, but takes on a central role in the development of the application. Furthermore, because of its data model-oriented methodology, data warehouse design requires a development life cycle that does not fully follow traditional development approaches. The development of a data warehouse virtually begins with a data model, from which the warehouse is built. In summary, it must be noted that data warehouses are high-maintenance systems that require their own support staff. In this way, experienced personnel implement future changes in a timely manner. It is also important to remember that users will probably abandon a technically advanced and fast warehouse if it adds little value from the start — thus reiterating the immense importance of clean data. One of the most important issues that is often disregarded during the construction and implementation of a data warehouse is data quality. This is not surprising because in many companies the concern for data quality in regard to legacy and transaction systems is not a priority. Accordingly, when it comes to ensuring the quality of data being moved into the warehouse, many companies continue with their old practices. This can turn out to be a costly mistake and has already led to many failures of corporate warehousing projects. As more and more companies are making use of these strategic database systems, data quality must become the numberone priority of all parties involved with data warehousing effort. Unreliable and inaccurate data in the data warehouse cause numerous problems. First and foremost, the confidence of the users in this technology is shattered and contributes to the already existing rift between business and IT. Furthermore, if the data is used for strategic decision making, unreliable data hurts not only the IT department, but also the entire company. One example is banks that had erroneous risk exposure data on a Texas-based business. When the oil market slumped in the early 1980s, those banks that had many Texas accounts encountered major losses. In another case, a manufacturing firm scaled down its operations and took action to rid itself of excess inventory. Because of inaccurate data, it had overestimated the inventory requirements and was forced to sell off criti297

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE cal business equipment. Such examples demonstrate the need and importance of data quality. Poor-quality data appears to be the norm rather than the exception, and points out that many technology managers have largely ignored the issue of quality. This is caused, in part, by the failure to recognize the need to manage data as a corporate asset. One cannot simply allow just anything to be moved into a data warehouse, or it will become useless and might be likened to a “data garbage dump.” To avoid data inaccuracies and their potential for harboring disasters, general data quality awareness must be emphasized. There are critical success factors that each company needs to identify before moving forward with the issue of data quality: • Senior management must make a commitment to the maintenance of the quality of corporate data. This can be achieved by instituting a data administration department that oversees the management of the corporate data resource. Furthermore, this department will establish data management standards, policies, procedures, and guidelines pertaining to data and data quality. • Data quality must be defined. For data to be useful, it must be complete, timely, accurate, valid, and consistent. Data quality does not simply consist of data “scrubbing” or auditing to measure its usefulness. The definition of data quality also includes the degree of quality that is required for each element being loaded into the data warehouse. If, for example, customer addresses are stored, it might be acceptable that the four-digit extension to the zip code is missing. However, the street address, city, and state are of much higher importance. Again, this must be identified by each individual company and for each item that is used in the data warehouse. • The quality assurance of data must be considered. Because data is moved from transactional/legacy systems to the data warehouse, the accuracy of this data needs to be verified and corrected if necessary. This might be the largest task because it involves the cleansing of existing data. No company is able to rectify all of its unclean data, and therefore procedures have to be put in place to ensure data quality at the source. Such a task can only be achieved by modifying business processes and designing data quality into the system. In identifying every data item and its usefulness to the ultimate users of this data, data quality requirements can be established. One might argue that this is too costly; but keep in mind that increasing the quality of data as an after-the-fact task is five to ten times more expensive than capturing it correctly at the source. If companies want to use data warehousing as a competitive advantage and reap its benefits, data quality must become one of the most important 298

Data Warehousing Concepts and Strategies issues. Only when data quality is recognized as a corporate asset, and treated as such by every member of the organization, will the promised benefits of a data warehouse initiative be realized. CONCLUSION The value of warehousing to an organization is multidimensional. An enterprise-wide data warehouse serves as a central repository for all data names used in an organization, and therefore simplifies business relationships among departments by using one standard. Users of the data warehouse get consistent results when querying this database and understand the data in the same way and without ambiguity. By its nature, the data warehouse also allows quicker access to summarized data about products, customers, and other business items of interest. In addition, the historical aspect of such a database (i.e., information kept for five to ten years) allows users to detect and analyze patterns in the business items. Organizations beginning to build a data warehouse should not undertake the task lightheartedy. It does not simply involve moving data from the operational database to the data warehouse, but rather the cleansing of data to improve its future usefulness. It is also important to distinguish between the different types of warehouse technologies (i.e., relational OLTP, multidimensional OTLP, and virtual data warehouses) and understand their fundamental differences. Other issues that need to be addressed and resolved range from creating a team dedicated to the design, implementation, and maintenance of a data warehouse, to the need for top-level support from the outset and management education on the concepts and benefits of corporate sharing of data. A further benefit of data warehousing results from the ability to mine the data using a variety of tools. Data mining aids corporate analysts in detecting customer behavior patterns, finding fraud within the organization, developing marketing strategies, and detecting inefficiencies in the internal business processes. Because the subject of data warehousing is immensely complex, outside assistance is often beneficial. It provides organizational members with training in the technology and exposure, both theoretical and hands-on, which enables them to continue with later phases of the project. The data warehouse is, without doubt, one of the most exciting technologies of our time. Organizations that make use of it increase their chances of improving customer service and developing more effective marketing strategies. 299

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE References 1. Chordas, L., “Building a Better Warehouse,” Best’s Review, 101, 117, 2001. 2. Inmon, W.H., Building the Data Warehouse, John Wiley & Sons, New York, 1993. 3. Marakas, G.M., Decision Support Systems in the 21st Century, 2nd ed., Prentice Hall, Upper Saddle River, NJ, 2003, 319. 4. Peiris, C., “Is It Data Webhouse or Warehouse?,” accessed at http://www.chrispeiris.com, September, 2002. 5. Chen, L. and Frolick, M., “Web-Based Data Warehousing: Fundamentals, Challenges, and Solution,” Information Systems Management, 17, 80, 2000. 6. Zhu, T., “Web Mining,” University of Alber ta Edmonton, accessed at http://www.cs.ualber ta.ca, September, 2002.

300

Chapter 24

Data Marts: Plan Big, Build Small John van den Hoven

In today's global economy, enterprises are challenged to do more with less in order to compete successfully with a host of competitors: big and small, new and old, domestic and international. With fewer human and financial resources with which to operate, and ever-growing volumes of data, enterprises need to better manage and leverage their information resources to operate more efficiently and effectively. This requires improved access to timely, accurate, and consistent data that can be easily shared with other team members, decision makers, and business partners.

It is widely acknowledged that data warehousing is the most effective way to provide this business decision support data. Under this concept, data is copied from operational systems and external information providers, then conditioned, integrated, and transformed into a read-only database that is optimized for direct access by the decision maker. The term "data warehousing" is particularly apt in that it describes data as an enterprise asset that must be identified, cataloged, and stored using discipline, structure, and organization to ensure that the user will always be able to find the correct information when it is needed.

Data warehousing is a popular topic in information technology and business journals, and at computer conferences. Like many areas of information technology, data warehousing has attracted advocates who peddle it as a panacea for a wide range of problems. Data warehousing is just a natural evolution of decision support technology. Although the concept of data warehousing is not new, it is only recently that the techniques, methodologies, software tools, database management systems, disk storage, networks, and processor capacity have all advanced to the point where it has become possible to deliver an effective working product.

DATA MARTS AND DATA WAREHOUSES

The term "data warehousing" can be applied to a broad range of approaches for providing improved access to business decision support


data. These approaches can range from the simple to the more complex, with many variations in between. However, there are two major approaches, which differ greatly in scale and complexity: the data mart and the data warehouse.

One approach to data warehousing is the data mart. A data mart is a subject-oriented or department-oriented data warehouse. It is a scaled-down version of a data warehouse that focuses on the local needs of a specific department such as finance or sales. A data mart contains a subset of the data that would be in an enterprise's data warehouse because it is subject- or department-oriented. An enterprise may have many data marts, each focused on a subset of the enterprise.

A data warehouse is an orderly and accessible repository of known facts or things from many subject areas, used as a basis for decision making. In contrast to the data mart approach, the data warehouse is generally enterprisewide in scope. Its goal is to provide a single, integrated view of the enterprise's data, spanning all the enterprise's activities. The data warehouse consolidates the various data marts and reconciles the various departmental perspectives into a single enterprise perspective.

There are advantages and disadvantages associated with both the data mart and data warehouse approaches. These two approaches differ in terms of the effort required to implement them, in their approaches to data, in supporting technology, and in the way the business and the users utilize these systems (see Exhibit 1 for more details).

The effort required to implement a data mart is considerably less than that required to implement a data warehouse. This is generally the case because the scope of a data mart is a subject area encompassing the applications in a business area, versus the multiple subject areas of the data warehouse, which can cover all major applications in the enterprise. As a result of its reduced scope, a data mart typically requires an order of magnitude less effort than a data warehouse, and it can be built in months rather than years. Therefore, a data mart generally costs considerably less than a data warehouse — tens or hundreds of thousands of dollars versus the millions of dollars necessary for a data warehouse. The effort is much less because a data mart generally covers fewer subject areas, has fewer users, and requires less data transformation, thus resulting in reduced complexity. In contrast, a data warehouse is cross-functional, covering multiple subject areas, has more users, and is a more complex undertaking because conflicting business requirements and perspectives must be reconciled to establish a centralized, structured view for all the data in the enterprise.

Exhibit 1. Contrasts between a Data Mart and a Data Warehouse

                                   Data Mart                                  Data Warehouse
Effort
  Scope                            A subject area                             Many subject areas
  Time to build                    Months                                     Years
  Cost to build                    Tens of thousands to hundreds of           Millions of dollars
                                   thousands of dollars
  Complexity to build              Low to medium                              High
Data
  Requirement for sharing          Shared (within business area)              Common (across enterprise)
  Sources                          Few operational and external systems       Multiple operational and external systems
  Size                             Megabytes to low gigabytes                 Gigabytes to terabytes
  Time horizon                     Near-current and historical data           Historical data
  Amount of data transformations   Low to medium                              High
  Frequency of update              Hourly, daily, weekly                      Weekly, monthly
Technology
  Hardware                         Workstations and departmental servers      Enterprise servers and mainframe computers
  Operating system                 Windows and Linux                          UNIX, z/OS, OS/390
  Database                         Workgroup or standard database servers     Enterprise database servers
Usage
  Number of concurrent users       Tens                                       Hundreds
  Type of users                    Business area analysts and managers        Enterprise analysts and senior executives
  Business focus                   Optimizing activities within the           Cross-functional optimization and
                                   business area                              decision making

From a data perspective, a data mart has reduced requirements for data sharing because of its limited scope compared to a data warehouse. It is simpler to provide shared data for a data mart because it is only necessary to establish shared data definitions for the business area or department. In contrast, a data warehouse requires common data, which necessitates establishing identical data definitions across the enterprise — a much more complex and difficult undertaking. It is also often easier to provide timely data updates to a data mart than to a data warehouse because it is smaller (megabytes to low gigabytes for a data mart versus gigabytes to terabytes for a data warehouse), requires less complex data transformations, and the enterprise does not have to synchronize

data updates from multiple operational systems. Therefore, it is easier to maintain data consistency within the data mart but difficult to maintain data consistency across the various data marts within an enterprise. The smaller size of the data mart enables more frequent updates (daily or weekly or, in some cases, hourly or near-real-time) than what is generally feasible for a data warehouse (weekly or monthly). This enables a data mart to contain near-current data in addition to the historical data that is normally contained in a data warehouse.

From a supporting technology perspective, a data mart can often use existing technology infrastructure or lower-cost technology components, thus reducing the cost and complexity of the data warehousing solution. The computing platform and the database management system are two key components of the technology infrastructure of the data warehousing solution. Data warehousing capabilities are increasingly becoming part of the core database management systems from Microsoft Corporation, Oracle Corporation, and IBM.

In terms of the computing platform, a data mart often resides on Intel-based computers running Windows or Linux. In contrast, a data warehouse often resides on a RISC-based (Reduced Instruction Set Computer) machine running the UNIX operating system, or on a mainframe computer running z/OS or OS/390, in order to support larger data volumes and larger numbers of business users.

In terms of the database management system, a data mart can also often be deployed using a lower-cost workgroup or standard relational database management system. Microsoft SQL Server 2000 is the leading platform for data mart deployment, with competition coming from Oracle Database Standard Edition and IBM DB2 Universal Database Workgroup Edition. In contrast, a data warehouse often requires a more expensive and more powerful database server. Oracle Database Enterprise Edition and IBM DB2 Universal Database Enterprise Edition are the leading platforms for data warehouses, with Microsoft SQL Server 2000 Enterprise Edition emerging as a challenger.

In addition to different supporting technologies, the way in which the business and the users utilize these data warehousing solutions is also different. There are fewer concurrent users in a data mart than in a data warehouse. These users are often functional managers such as sales executives or financial executives who are focused on optimizing the activities within their specific department or business area. In contrast, the users of a data warehouse are often analysts or senior executives making decisions that are cross-functional and require input from multiple areas of the business.

Thus, the data mart is often used for more operational or tactical decision making, while the data warehouse is used for strategic decision making and some tactical decision making. A data mart is therefore a more short-term, timely data delivery mechanism, while a data warehouse is a longer-term, reliable history or archive of enterprise data.

PLAN BIG, START SMALL

There is no one-size-fits-all strategy. An enterprise's data warehousing strategy can progress from a simple data mart to a complex data warehouse in response to user demands, the enterprise's business requirements, and the enterprise's maturity in managing its data resource. An enterprise can also derive a hybrid strategy that utilizes one or more of these base strategies to best fit its current applications, data, and technology architectures. The right approach is the data warehousing strategy that is appropriate to the business need and the perceived benefits.

For many enterprises, a data mart is often a practical first step to gain experience in building and managing a data warehouse, while introducing business users to the benefits of improved access to their data and generally demonstrating the business value of data warehousing. However, these data marts often grow rapidly to hundreds of users and hundreds of gigabytes of data derived from many different operational systems. Therefore, planning for eventual growth should be an essential part of a data mart project.

A balance is required between starting small to get the data mart up and running quickly, and planning for the bigger data mart or data warehouse that will likely be required over time. It is therefore important to "plan big and start small": that is, to implement a data mart within the context of an overall architecture for data, technology, and applications that allows the data mart to support more data, more users, and more sophisticated and demanding uses over time. Technology advances such as Internet/intranet technology, portals, prepackaged analytical applications, improved data warehouse management tools, and virtual data warehousing architectures are making this increasingly feasible. Otherwise, the enterprise will end up implementing a series of independent and isolated data marts that recreate the jumble of systems and "functional silos" that data warehousing was intended to remedy in the first place.

CONCLUSION

The enterprise data warehouse is the ideal because it will provide a consistent and comprehensive view of the enterprise, with business users using common terminology and data throughout the enterprise. However, it remains an elusive goal for most enterprises because it is very difficult and

costly to achieve with today's technology and today's rapidly changing business environment.

A more cost-effective option for many enterprises is the data mart. It is a more manageable data warehousing project that can focus on delivering value to a specific business area. Thus, it can provide many of the decision support capabilities without incurring the cost and complexity associated with a centralized enterprise data warehouse. With proper planning, these data marts can be gradually consolidated under a common management umbrella to create an enterprise data warehouse as it makes business sense and as the technology evolves to better support this architecture.


Chapter 25

Data Mining: Exploring the Corporate Asset
Jason Weir

Data mining, as a methodology, is a set of techniques used to uncover previously obscure or unknown patterns and relationships in very large databases. The ultimate goal is to arrive at comprehensible, meaningful results from an extensive analysis of information. For companies with very large and complex databases, discovery-based data mining approaches must be implemented in order to realize the complete value that data offers.

Companies today generate and collect vast amounts of data in the ongoing process of doing business. Web-based commerce and electronic business solutions have greatly increased the amount of data available for further processing and analysis. Transaction data such as that produced by inventory, billing, shipping and receiving, and sales systems is stored in organizational or departmental databases. It is understood that data represents a significant competitive advantage, but realizing its full potential is not simple. Decision makers must be able to interpret trends, identify factors, or utilize information based on clear, timely data in a meaningful format. For example, a marketing director should be able to identify a group of customers, 18 to 24 years of age, who own notebook computers and need to or are likely to purchase an upcoming collaboration software product. After identifying those people, the director sends them advance offers, information, or product order forms to increase product pre-sales.

Data mining, as a methodology, is a set of techniques used to uncover previously obscure or unknown patterns and relationships in very large databases. The ultimate goal is to arrive at comprehensible, meaningful results from extensive analysis of information.
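As a point of reference for the discussion that follows, the marketing director's request could be met today with a simple, user-driven query; the catch is that someone must already know which question to ask. The sketch below is purely illustrative: the record layout and field names are hypothetical, and in practice the data would come from a marketing database rather than an in-memory list.

```python
# Hypothetical customer records for illustration only.
customers = [
    {"id": 101, "age": 22, "owns_notebook": True,  "email": "a@example.com"},
    {"id": 102, "age": 35, "owns_notebook": True,  "email": "b@example.com"},
    {"id": 103, "age": 19, "owns_notebook": False, "email": "c@example.com"},
]

# Verification-based query: the user supplies the hypothesis
# ("18- to 24-year-old notebook owners") and simply filters for it.
prospects = [c for c in customers
             if 18 <= c["age"] <= 24 and c["owns_notebook"]]

for c in prospects:
    print(c["id"], c["email"])   # candidates for the advance offer
```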


HOW IS DATA MINING DIFFERENT FROM OTHER ANALYSIS METHODS?

Data mining differs from other analysis methods in several ways. A significant distinction between data mining and other analytical tools is in the approaches used in exploring the data. Many of the available analytical tools support a verification-based approach, in which the user hypothesizes about specific data relationships and then uses the tools to verify or refute those presumptions. This verification-based process relies on the intuition of the user to pose the questions and refine the analysis based on the results of potentially complex queries against a database. The effectiveness of this analysis depends on several factors, the most important of which are the ability of the user to pose appropriate questions, the capability of the tools to return results quickly, and the overall reliability and accuracy of the data being analyzed.

Other available analytical tools have been optimized to address some of these issues. Query and reporting tools, such as those used in data mart or warehouse applications, let users develop queries through point-and-click interfaces. Statistical analysis packages, like those used by many insurance and actuarial firms, provide the ability to explore relationships among a few variables and determine statistical significance against demographic sets. Multidimensional online analytical processing (OLAP) tools enable a fast response to user inquiries through their ability to compute hierarchies of variables along dimensions such as size, color, or location.

Data mining, in contrast to these analytical tools, uses what are called discovery-based approaches, in which pattern matching and other algorithms are employed to determine the key relationships in the data. Data mining algorithms can look at numerous multidimensional data relationships concurrently, highlighting those that are dominant or exceptional. That is, true data mining tools uncover trends, patterns, and relationships automatically. As mentioned, many other types of analytical methods rely on user intuition or the ability to pose the "right kind" of question. In summary, analytical tools — query tools, statistical tools, and OLAP — and the results they produce are all user driven, while data mining is data driven.

THE NEED FOR DATA MINING

As discussed, traditional methods involve the decision maker hypothesizing the existence of information of interest, converting that hypothesis to a query, posing that query to the analysis tool, and interpreting the returned results with respect to the decision being made. For example, the marketing director must hypothesize that notebook-owning, 18- to 24-year-old customers are likely to purchase the upcoming software release. After posing the query, it is up to the individual to interpret the returned results and determine if the list represents a good group of product prospects.

The quality of the extracted information is based on the user's interpretation of the posed query's results. The intricacies of data interrelationships, as well as the sheer size and complexity of modern data stores, necessitate more advanced analysis capabilities than those provided by verification-based approaches. The ability to automatically discover important information hidden in the data, and then present it in the appropriate way, is a critical complementary technology to verification-based approaches. Tools, techniques, and systems that perform these automated analysis tasks are referred to as "discovery based."

Discovery-based systems applied to the data available to the marketing director may identify many groups, including, for example, 18- to 24-year-old male college students with laptops, 24- to 30-year-old female software engineers with both desktop and notebook systems, and 18- to 24-year-old customers planning to purchase portable computers within the next six months. By recognizing the marketing director's goal, the discovery-based system can identify the software engineers as the key target group by spending pattern or some other variable.

In sum, verification-based approaches, although valuable for quick, high-level decision support such as historical queries about product sales by fiscal quarter, are insufficient. For companies with very large and complex databases, discovery-based data mining approaches must be implemented in order to realize the complete value that data offers.

THE PROCESS OF MINING DATA

Selection and Extraction

Constructing an appropriate database to run queries against is a critical step in the data mining process. A marketing database may contain extensive tables of data, ranging from purchasing records and lifestyle data to more advanced demographic information such as census records. Not all of this data is required on a regular basis, and the unneeded data should be filtered out of the query tables. Additionally, even after selecting the desired database tables, it is not always necessary to mine the contents of the entire table to identify useful information; under certain conditions and for certain types of data mining techniques, a sample is sufficient. For example, when creating a classification or prediction model, it may be adequate to first sample the table and then mine the sample. This is usually a faster and less expensive operation.

Essentially, potential sources of data (e.g., census data, sales records, mailing lists, and demographic databases) should be explored before meaningful analysis can take place. The selected data types can be organized along multiple tables.

Developing a sound model involves combining parts of separate tables into a single database for mining purposes.

Data Cleansing and Transformation

Once the database tables have been selected and the data to be mined has been identified, it is usually necessary to perform certain transformations and cleansing routines on the data. Data cleansing and transformations are determined by the type of data being mined as well as the data mining technique being used. Transformations vary from conversions of one type of data to another (such as numeric data to character data or currency conversions), to more advanced transformations (such as the application of mathematical or logical functions to certain types of data).

Cleansing, on the other hand, is used to ensure the reliability and accuracy of results. Data can be verified, or cleansed, in order to remove duplicate entries, attach real values to numeric or alphanumeric codes, and omit incomplete records. "Dirty" (or inaccurate) data in the mining data store must be avoided if results are to be accurate and useful. Many data mining tools include a system log or some other graphical interface tool to identify erroneous data in queries; however, every effort should be made prior to this stage to ensure that incorrect data is not included in the mining database. If errors are not discovered, the result will be lower-quality analysis and, in turn, lower-quality decisions.

Mining, Analysis, and Interpretation

The clean and transformed data is subsequently mined using one or more techniques to extract the desired type of information. For example, to develop an accurate classification model that predicts whether or not a customer will upgrade to a new version of a software package, a decision maker must first use clustering to segment the customer database, and then apply rules to automatically create a classification model for each desired cluster. While mining a particular dataset, it may be necessary to access additional data from a data mart or warehouse, and to perform additional transformations of the original data. (The terms and methods mentioned above are defined and discussed later in this chapter.)

The final step in the data mining process is analyzing and interpreting results. The extracted and transformed data is analyzed with respect to the user's goal, and the best information is identified and presented to the decision maker through the decision support system. The purpose of result interpretation is not only to graphically represent the output of the data mining operation, but also to filter the information that will be presented through the decision support system. For example, if the goal is to develop a classification model, then during the result interpretation step the robustness of the extracted model is tested using one of the established

methods. If the interpreted results are not satisfactory, it may be necessary to repeat the data mining step or to repeat other steps; what this really speaks to is the quality of the data. The information extracted through data mining must be ultimately comprehensible. For example, it may be necessary, after interpreting the results of a data mining operation, to go back and add data to the selection process or to perform a different calculation during the transformation step.

TECHNIQUES

Classification

Classification is perhaps the most often employed data mining technique. It involves a set of instances or predefined examples used to develop a model that can classify the population of records at large. The use of classification algorithms begins with a sample set of preclassified example transactions. For a fraud detection application, this would include complete records of both fraudulent and valid transactions, determined on a record-by-record basis. The classifier-training algorithm uses these preclassified examples to determine the set of parameters required for proper identification. The algorithm then encodes these parameters into a model called a classifier, or classification model. The approach affects the decision-making capability of the system. Once an effective classifier is developed, it is used in a predictive mode to classify new records automatically into these same predefined classes. In the fraud detection case cited above, the classifier would be able to identify probable fraudulent activities. Another example would involve a financial application in which a classifier capable of identifying risky loans could be used to aid in the decision of whether or not to grant a loan to an individual.

Association

Given a collection of items and a set of transactions, each of which contains some number of items from that collection, an association is an operation against this set of records that returns the affinities that exist among the collection of items. "Market basket" analysis is a common application that utilizes association techniques. Market basket analysis involves a retailer running an association function over the point-of-sale transaction log. The goal is to determine affinities among shoppers. For example, in an analysis of 100,000 transactions, association techniques could determine that "20 percent of the time, customers who buy a particular software application also purchase the complementary add-on software pack." In other words, associations are items that occur together in a given event or transaction. Association tools discover rules.
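As a rough illustration of what an association function computes, the sketch below counts item pairs across a handful of transactions and reports the confidence of simple "customers who buy X also buy Y" rules. It is a toy example with made-up basket data, not a production association algorithm.

```python
from collections import Counter
from itertools import combinations

# Toy point-of-sale transactions (each is the set of items in one basket).
transactions = [
    {"photo_app", "plugin_pack"},
    {"photo_app"},
    {"photo_app", "plugin_pack", "tutorial"},
    {"office_suite"},
    {"photo_app", "tutorial"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

# Confidence of the rule X -> Y: of the baskets containing X,
# what fraction also contain Y?
for (x, y), joint in pair_counts.items():
    for a, b in ((x, y), (y, x)):
        confidence = joint / item_counts[a]
        print(f"{a} -> {b}: {confidence:.0%} of buyers of {a} also bought {b}")
```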

Another example of the use of association discovery could be illustrated in an application that analyzes the claim forms submitted by patients to a medical insurance company. The goal is to discover patterns among the claimants' treatments. Assume that every claim form contains a set of medical procedures that were performed on the given patient during one visit. By defining the set of items to be the collection of all medical procedures that can be performed on a patient, and the records to correspond to each claim form, the application can find, using the association technique, relationships among medical procedures that are often performed together.

Sequence-Based

Traditional "market basket" analysis deals with a collection of items as part of a point-in-time transaction. A variant of this occurs when there is additional information to tie together a sequence of purchases. An account number, a credit card, or a frequent-shopper number are all examples of ways to track multiple purchases in a time series. Rules that capture these relationships can be used, for example, to identify a typical set of precursor purchases that might predict the subsequent purchase of a specific item. In our software case, sequence-based mining could determine the likelihood that a customer purchasing a particular software product will subsequently purchase complementary software or a hardware device (such as a joystick or a video card). Sequence-based mining can be used to detect the set of customers associated with frequent buying patterns. Use of sequence-based mining on the set of insurance claims previously discussed can lead to the identification of frequently occurring sequences of medical procedures performed on patients. This can then be harnessed in a fraud detection application to detect cases of medical insurance fraud.

Clustering

Clustering segments a database into different groups. The goal is to find groups that differ from one another, as well as the similarities among members of a group. The clustering approach assigns records with a large number of attributes into a relatively small set of groups, or "segments." This assignment is performed automatically by clustering algorithms that identify the distinguishing characteristics of the dataset and then partition the space defined by the dataset attributes along natural "boundaries." There is no need to identify the groupings desired or the attributes that should be used to segment the dataset.

Clustering is often one of the first steps in data mining analysis. It identifies groups of related records that can be used as a starting point for

exploring further relationships. This technique supports the development of population segmentation models, such as demographic-based customer segments. Additional analyses using standard analytical and other data mining techniques can determine the characteristics of these segments with respect to some desired outcome. For example, the buying habits of multiple population segments might be compared to determine which segments to target for a new marketing campaign.

Estimation

Estimation is a variation of the classification technique. Essentially, it involves the generation of scores along various dimensions in the data. For example, rather than employing a binary classifier to determine whether a loan applicant is approved or classified as a risk, the estimation approach generates a credit-worthiness "score" based on a pre-scored sample set of transactions. That is, sample data (complete records of approved and risk applicants) is used in determining the worthiness of all records in a dataset.

APPLICATIONS OF DATA MINING

Data mining is now being applied in a variety of industries, ranging from investment management and retail solutions to marketing, manufacturing, and healthcare applications. It has been pointed out that many organizations, due to the strategic nature of their data mining operations, will not even discuss their projects with outsiders. This is understandable, given the importance and potential that successful solutions offer organizations. However, there are several well-known applications that are proven performers, including customer profiling, market basket analysis, and fraud analysis.

In customer profiling, characteristics of good customers are identified with the goals of predicting who will become one and helping marketing departments target new prospects. Data mining can find patterns in a customer database that can be applied to a prospect database so that customer acquisition can be appropriately targeted. For example, by identifying good candidates for mail offers or catalogs, direct-mail marketing managers can reduce expenses and increase their sales generation efforts. Targeting specific promotions to existing and potential customers offers similar benefits.

Market-basket analysis helps retailers understand which products are purchased together or by an individual over time. With data mining, retailers can determine which products to stock in which stores, as well as how to place them within a store. Data mining can also help assess the effectiveness of promotions and coupons.
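As a sketch of the segmentation step that often underlies customer profiling, the example below groups customers by two attributes using k-means clustering. It assumes scikit-learn is available, and the data is invented for illustration; a real segmentation would draw many more attributes from the customer database.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Invented customer attributes: [age, annual spend in dollars].
customers = np.array([
    [22,  400], [24,  550], [23,  480],   # younger, modest spend
    [41, 2200], [45, 2600], [39, 2400],   # mid-career, high spend
    [63,  900], [58, 1100], [60,  950],   # older, moderate spend
])

# Scale the attributes so age and spend contribute comparably,
# then assign each customer to one of three segments.
scaled = StandardScaler().fit_transform(customers)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)

for row, seg in zip(customers, segments):
    print(f"age={int(row[0]):>2}, spend={int(row[1]):>5} -> segment {seg}")
```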

Finally, fraud detection is of great benefit to credit card companies, insurance firms, stock exchanges, government agencies, and telecommunications firms. The aggregate total of fraud losses is enormous; but with data mining, these companies can identify potentially fraudulent transactions and contain the damage. Financial companies also use data mining to determine market and industry characteristics as well as to predict individual company and stock performance. Another interesting niche application is in the medical field, where data mining can help predict the effectiveness of surgical procedures, diagnostic tests, medication, and other services.

SUMMARY

More and more companies are beginning to realize the potential for data mining within their organizations. However, unlike the "plug-and-play," out-of-the-box business solutions that many have become accustomed to, data mining is not a simple application. It involves a great deal of forethought, planning, research, and testing to ensure a sound, reliable, and beneficial project. It is also important to remember that data mining is complementary to traditional query and analysis tools, data warehousing, and data mart applications; it does not replace these useful and often vital solutions.

Data mining enables organizations to take full advantage of the investment they have made, and are currently making, in building data stores. By identifying valid, previously unknown information in large databases, decision makers can tap into the unique opportunities that data mining offers.


Chapter 26

Data Conversion Fundamentals
Michael Zimmer

When systems developers build information systems, they usually do not start with a clean slate. Often, they are replacing an existing application, and they must always determine whether the existing information should be preserved. Usually, the older information is transferred to the new system, a process formerly known simply as data conversion but now more often called "extract, transform, and load" (ETL). This ETL process may be a one-time transfer of data from an old system to a new system, or part of an ongoing process such as is found in data warehouse and data mart applications. In fact, any time that data interoperability is an issue, similar considerations apply. Even business-to-business (B2B) integration and electronic data interchange (EDI) have some similar characteristics, particularly with regard to defining common semantics and applying business rules against the data to ensure quality.

ETL can involve moving data from flat file systems to relational database management systems (RDBMSs). It can also involve moving from systems with loose constraints to new systems with tight constraints. Over the past decade or so, various tools for ETL have appeared as data warehousing has exploded. In addition, newer technologies such as intranets, XML, XSLT, and related standards have proven to be useful.

This chapter focuses on laying the groundwork for successfully executing a data conversion effort the first time around. It is assumed in this chapter that the principles of conceptual data modeling are followed. For expository purposes, it is assumed that relational database technology is employed but, in fact, the methods are essentially independent of technology. At the logical level, the terms "entity set," "entity," and "attribute" are used in place of the terms "file," "record," and "field." At the physical level, the terms "table," "row," and "column" are used instead of "file," "record," and "field." The members of IS (information systems) engaged in the data conversion effort are referred to as the data conversion team (DCT).


COMMON PROBLEMS WITH DATA

The difficulties of a data conversion effort are almost always underestimated. The conversion usually costs many times more than originally anticipated. This is invariably the result of an inadequate understanding of the cost and effort required to correct errors in the data. The quality of the existing data is typically much worse than the users and the development team anticipate. Data may violate key business rules and be incomplete. Problems with data can result from missing information and from mismatches between the old model (which is often only implicit) and the new model (which is usually explicitly documented). Problems also result if the conversion effort is started too late in the project and is under-resourced.

Costs and Benefits of Data Conversion

Before embarking on data conversion, the DCT should decide whether data really needs to be converted and whether it is feasible to abandon the noncurrent data. In some situations, starting fresh is an option. The customers may decide that the costs of preserving and correcting old information exceed the benefits expected. Often, they will want to preserve old information but may not have the resources to correct historical errors. Of course, with a data warehouse project, it is a given that the data will be converted; preservation of old information is critical.

The Cost of Not Converting. The DCT should first demonstrate the cost of permitting erroneous information into the new database. It is a decision to be made by user management. In the long run, permitting erroneous data into the new application will usually be costly. The data conversion team should explain what the risks are in order to justify the costs of robust programming and data error correction.

The Costs of Converting. It is no easier to estimate the cost of a conversion effort than to estimate the cost of any other development effort. The special considerations are that a great deal of manual intervention, and subsequently extra programming, may be necessary to remedy data errors. A simple copy procedure usually does not serve the organization's needs. If the early exploration of data quality, and the robust design and programming of the conversion routines, are skimped on, the IS group will generally pay for it.

STEPS IN THE DATA CONVERSION PROCESS

In even the simplest information technology (IT) systems development projects, the efforts of many players must come together. At the managerial and employee levels, certain users should be involved, in addition to

the applications development group, data administration, database administration, computer operations, and quality assurance. The responsibilities of the various groups must be clearly defined. In the simplest terms, data conversion involves the following steps:

• Determine if conversion is required.
• Plan the conversion.
• Determine the conversion rules.
• Identify problems.
• Write down the requirements.
• Correct the data.
• Program the conversion.
• Run the conversion.
• Check the audit reports.
• Institutionalize the results.

Determining if Conversion Is Required In some cases, data does not need to be converted. IS may find that there is no real need to retain old information. The data could be available elsewhere, such as on microfiche. Another possibility is that the current data is so erroneous, incomplete, or inadequate that there is no reason to keep it. The options must be presented to the clients so that they can determine the best course of action. Planning the Conversion and Determining the Conversion Rules Once the DCT and the client have accepted the need for a conversion, the work can be planned in detail. The planning activities for conversion are standard in most respects and are typical of development projects. Beyond sound project management, it is helpful for the DCT to keep in mind that error correction activities may be particularly time-consuming. Determination of the conversion rules consists of the following steps, usually performed in sequence, with any necessary iteration: • • • • • • •

• Analyze the old physical data model.
• Conduct a preliminary investigation of data quality.
• Analyze the old logical data model.
• Analyze the new logical data model.
• Analyze the new physical data model.
• Determine the data mapping.
• Determine how to treat missing information.

Analyze the Old Physical Data Model. Some published development methods imply that development starts with a blank slate. As a result, analysis of the existing system is neglected. The reverse-engineering paradigm 317

asserts that the DCT should start with the existing computer application to discern the business rules. Data conversion requires this approach for data analysis. The DCT can look at old documentation, database definitions, file descriptions, and record layouts to understand the current physical data model.

Conduct a Preliminary Investigation of Data Quality. Without some understanding of the data structures for the current application, it is not possible to look at the quality of the data. To examine the quality of the data, the DCT can run existing reports, do online queries, and, if possible, quickly write some fourth-generation language programs to examine issues such as referential, primary key, and domain integrity violations that the users might never notice. When the investigation is done, the findings should be formally documented.

Analyze the Old Logical Data Model. When the physical structure of the data is understood, it can be represented in its normalized logical structure. This step, although seemingly unnecessary, allows the DCT to specify the mapping in a much more reliable fashion. The results should be documented with the aid of an entity-relationship diagram accompanied by dictionary descriptions.

Analyze the New Physical Data Model. The new logical model should be transformed into a physical representation. If a relational database is being used, this may be a simple step. Once this model is done, the mapping can be specified.

Determine the Data Mapping. This step is often more difficult than it might seem initially. Often, there are cases where the old domain must be transformed into a new one; an old field is split into two new ones; two old fields become one new one; or multiple records are looked at to derive a new one. There are many ways of reworking the data, and an unlimited number of special cases may exist. Not only are the possibilities for mapping numerous and complex, but in some cases it is not possible to map to the new model because key information was not collected in the old system.

Determine How to Treat Missing Information. It is common when doing conversion to discover that some of the data to populate the new application is not available and that there is no provision for it in the old database. It may be available elsewhere as manual records, or it may never have been recorded at all.

Sometimes, this is only an inconvenience — dummy values can be put in certain fields to indicate that the value is not known. In the more serious case, the missing information would be required to create a primary key or

a foreign key. This can occur when the new model is significantly different from the old. In this case, the dummy value strategy may be appropriate, but it must be fully explained to the client.

Identify Problems

Data problems can only be detected after both the old data structure and the new model are fully understood. A full analysis of the issue includes looking for erroneous information, missing information, redundancies, inconsistencies, missing keys, and any other problem that will make the conversion difficult or impossible without a lot of manual intervention. Any findings should be documented and brought to the attention of the client. Information must be documented in a fashion that makes sense to the client.

Once the problems have been identified, the DCT can help the client identify a corrective strategy. The client must understand why errors have been creeping into the systems. The cause is usually a mixture of problems with the old data structure, problems with the existing input system, and data entry problems that have been ongoing. It may be that the existing system does not properly reflect the business. The users may have been working around the system's deficiencies for years in ways that violated its integrity. In any case, the new system should be tighter than the old one at the programming and database level; it should properly reflect the business; and the new procedures should not result in problems with usability or data quality.

Document the Requirements

After the initial study of the conversion is done, the findings should be documented. Some of this work will have been done as part of the regular system design. There must also be a design for the conversion programs, whether conversion is a one-time or an ongoing activity. First-time as well as ongoing load requirements must be examined. Estimates should include the time necessary to extract, edit, correct, and upload data. Costs for disk storage and CPUs should also be projected. In addition, the sizing requirements should be estimated well in advance of hardware purchases.

Correct the Data

The client may want to correct the data before the conversion effort begins, or may be willing to convert the data over time. It is best to make sure that the data that is converted is error-free, at least with respect to the formal integrity constraints defined for the new model.
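As a sketch of the kind of automated checks that the preliminary data quality investigation and the conversion edits can apply, the queries below look for duplicate primary keys, orphaned foreign keys, and simple domain violations, and print control totals for the findings report. The table and column names are invented for illustration, and the SQL is issued here through Python's standard sqlite3 module; any relational source would serve equally well.

```python
import sqlite3

# Assumes a staging copy of the old data has already been loaded into
# a hypothetical SQLite database with "employee" and "job" tables.
conn = sqlite3.connect("legacy_extract.db")

checks = {
    # Entity integrity: employee numbers that occur more than once.
    "duplicate primary keys": """
        SELECT emp_no, COUNT(*) FROM employee
        GROUP BY emp_no HAVING COUNT(*) > 1""",
    # Referential integrity: job records whose employee does not exist.
    "orphaned job records": """
        SELECT j.job_id FROM job j
        LEFT JOIN employee e ON e.emp_no = j.emp_no
        WHERE e.emp_no IS NULL""",
    # Domain integrity: status codes outside the documented domain.
    "invalid status codes": """
        SELECT emp_no, status FROM employee
        WHERE status NOT IN ('A', 'L', 'T')""",
}

for name, sql in checks.items():
    rows = conn.execute(sql).fetchall()
    print(f"{name}: {len(rows)} violation(s)")  # control totals for the findings
```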

If erroneous information is permitted into the new system, it will probably be problematic later. The correction process may involve using the existing system to make changes. Often, the types of errors that are encountered may require some extra programming facilities. Not all systems provide all of the data modification capabilities that might be necessary. In any case, this step can sometimes take months of effort and requires a mechanism for evaluating the success of the correction effort.

Program the Conversion

The conversion programs should be designed, constructed, and tested with the same discipline used for any other software development. Although the number of workable designs is large, there are a few helpful rules of thumb:

• The conversion program should edit for all business rule violations and reject nonconforming information. The erroneous transactions should go to an error file, and a log of the problem should be written. The items in error should be unambiguously identified. The soundest course is to avoid putting incorrect data into the new system.
• The conversion programs must produce an audit trail of the transactions processed. This includes control totals, checksums, and date and time stamps. This provides a record of how the data was converted after the job is done.
• Tests should be as rigorous as possible. All design documents and code should be tested in a structured fashion. This is less costly than patching up problems caused by data corruption in a million-record file.
• Provisions should be made for restart in case of interruption in the run.
• It should be possible to roll back to some known point if there are errors.
• Special audit reports should be prepared to run against the old and new data to demonstrate that the procedures worked. This reporting can be done in addition to the standard control totals from the programs.

Run the Conversion

It may be desirable to run a test conversion to populate a test database. Once the programs are ready and volume testing has been done, it is time for the first conversion, which may be only one of many. If this is a data warehouse application, the conversion could be an ongoing effort. It is important to know how long the initial loads will take so that scheduling can be done appropriately. The conversion can then be scheduled for an opportune cutover time. The conversion will go smoothly if

contingencies are built in and sound risk management procedures are followed. There may be a number of static tables, perhaps used for code lookup, that can be converted without as much fanfare, but the main conversion will take time. At the time planned for cutover, the old production system can be frozen from update or run in parallel. The production database can then be initialized and test records removed (if any have been created). The conversion and any verification and validation routines can be run at this point.

Check the Audit Reports

Once the conversion is finished, special audit reports should be run to prove that it worked, to check control totals, and to deal with any problems. It may be necessary to roll back to the old system if problems are excessive. The new application should not be used until it is verified that the conversion was correct; otherwise, a lot of work could be lost.

Institutionalize the Results

In many cases, as in data warehousing, conversion will be a continuous process and must be institutionalized. Procedural controls are necessary to make sure that the conversion runs on schedule, results are checked rigorously, rejected data is dealt with appropriately, and failed runs are handled correctly.

DATA QUALITY

A strategy to identify data problems early in the project should be in place, although the details will change according to the project. A preliminary investigation can be done as soon as the old physical data model has been determined. It is important to document the quality of the current data, but this step may require programming resources. Customers at all levels should be notified if there are data quality issues to be resolved. Knowledge of the extent of data quality problems may influence the users' decision to convert or abandon the data.

Keeping the Data Clean

If the data is corrected on a one-time basis, it is important to ensure that more erroneous data is not being generated by some faulty process or programming. There may be a considerable time interval between data correction and conversion to the new system.

Types of Data Abnormalities

There may be integrity problems in the old system. For example, there may be no unique primary key for some of the old files, which almost guarantees

redundancy in the data. This violation of entity integrity can be quite serious. To ensure entity integrity in the new system, the DCT will have to choose which of the old records are to be accepted as the correct ones to move into the new system. It is helpful for audit routines to report on this fact. In addition, in the new system, it will be necessary to devise a primary key, which may not be available in the old data.

Uniqueness. In many cases, there are other fields that should also be unique and serve as an alternate primary key. In some cases, even if there is primary key integrity, there are redundancies in other alternative keys, which again create a problem for integrity in the new system.

Referential Integrity. The DCT should determine whether the data correctly reflects referential integrity constraints. In a relational system, tables are joined together by primary key/foreign key links. The information to create this link may not be available in the old data. If records from different files are to be matched and joined, it should be determined whether the information exists to correctly do the join (i.e., a unique primary key and a foreign key). Again, this problem needs to be addressed prior to conversion.

Domain Integrity. The domain for a field imposes constraints on the values that should be found there. IS should determine if there are data domains that have been coded into character or numeric fields in an undisciplined and inconsistent fashion. It should further be determined whether there are numeric domains that have been coded into character fields, perhaps with some nonnumeric values. There may be date fields that are just text strings, and the dates may be in any order. A common problem is that date or numeric fields stored as text may contain absurd values with entirely the wrong data type.

Another determination that should be made is whether the domain coding rules have changed over time and whether values have been re-coded. It is common for coded fields to contain codes that are no longer in use, and often codes that never were in use. Also, numeric fields may contain out-of-range values. Composite domains could cause problems when trying to separate them for storage in multiple fields; the boundaries for each subitem may not be in fixed columns. There may be domains that incorrectly model internal hierarchy. This is common in old-style systems and makes data modeling difficult. There could be attributes based on more than one domain. Not all domain problems will create conversion difficulties, but they may be problematic later

if it cannot be proven that these were preexisting anomalies and not a result of the conversion efforts.

Wrong Cardinality. The old data could contain cardinality violations. For example, the structure may say that each employee has only one job record, but in fact some may have five or six. These sorts of problems make database design difficult.

Wrong Optionality. Another common problem is the absence of a record when one should be there. It may be a rule that every employee has at least one record of appointment, but for some reason one percent of the old records show no job for an employee. The client must resolve this inconsistency.

Orphaned Records. In many cases, a record is supposed to refer back to some other record by making reference to the key value for that other record. In many badly designed systems, there is no key to refer back to, at least not one that uniquely identifies the record. Technically, there is no primary key. In some cases, there is no field available to make this reference, which means that there is no foreign key. In other cases, the key structure is fine but the actual record referred to does not exist. This is a problem with referential integrity. A record without a parent is called an orphan.

Inconsistency and Redundancy Combined. If each data item is fully determined by its key, there will be no undesirable redundancy and the new database will be normalized. If attempts at normalization are made where there is redundant information, the DCT will be unable to make consistent automated choices about which of the redundant values to select for the conversion.

On badly designed systems, there will be a great deal of undesirable redundancy. For example, a given fact may be stored in multiple places. This type of redundancy wastes disk storage, but may in some cases permit faster queries. The problem is that without concerted programming efforts, this redundant information is almost certainly going to become inconsistent. If the old data has confusing redundancies, it is important to determine whether they are due to historical changes in the business rules or historical changes in the values of fields and records. The DCT should also determine whether the redundancies are found across files or within individual files across records. There may be no way to determine which data is current, and an arbitrary choice will have to be made. If the DCT chooses to keep all the information to reflect the changes over time, it cannot be stored correctly because the date information will not be in the system. This is an extremely common problem.

Missing Information. When dealing with missing information, it is helpful

to determine whether: • • • • •

• The old data is complete.
• Mandatory fields are filled in.
• All necessary fields are available in the files.
• All records are present.
• Default or dummy values can be inserted where there is missing information.

Date Inconsistencies. When examining the conversion process, it is helpful to determine whether:

• The time dimension is correctly represented.
• The data spans a long enough time period.
• The data correctly reflects the state of the business for the time at which it was captured.
• All necessary date fields are available to properly model the time dimension.
• Dates are stored with century information.
• Date ranges are in the correct sequence within a given record.
• Dates are correct from record to record.

Miscellaneous Inconsistencies. In some fields, there will be values derived from other fields. A derived field might be computed from other fields in the same record or may be a function of multiple records. The derived fields may even be stored in an entirely different file. In any case, the derived values may be incorrect for the existing data. Given this sort of inconsistency, it should be determined which is correct — the detail or the summary information.

Intelligent Keys. An intelligent key results from a fairly subtle data modeling problem. For example, there are two different independent items from the real world, such as Employee and Department, where the Employee is given a key that consists in part of the Department key. The implication is that if a Department is deleted, the employee record will be orphaned; and if an Employee changes Departments, the Employee key will have to change. When doing a conversion, it would be desirable to remove the intelligent key structure.

Other Problems. Other problems with the old data exist that cannot be easily classified. These problems involve errors in the data that cannot be detected except by going back to the source, or violations of various arcane constraints that have not been programmed as edit checks in the existing system. There may be special rules that tie field values to multiple records, multiple fields, or multiple files. Although they may not have a

Data Conversion Fundamentals practical implication for the conversion effort, if these problems become obvious, they might be falsely attributed to the conversion routines. THE ERROR CORRECTION PROCESS The data correction effort should be run as part of a separate subproject. The DCT should determine whether the resources to correct the data can be made available. A wholesale commitment from the owners of the data will be required, and probably a commitment of programming resources as well. Error correction cannot be done easily within the context of rapid applications development (RAD) or many of the agile methods. Resources for the Correction Effort Concerning resources for the correction effort, the best-case scenario would ensure that: • Resources are obtained from the client if a major correction effort is required. • Management pays adequate attention to the issue if a data-quality problem is identified. • The sources of the problem will be identified in a fair and nonjudgmental manner if a data quality problem is identified. Choices for Correction The effort required to write an edit program to look for errors is considerable, and chances are good that this will be part of the conversion code and not an independent set of audit programs. Some of the errors may be detected before conversion begins, but it is likely that many of the problems will be found during the conversion run. Once data errors are discovered, data can be copied as is, corrected, or abandoned. The conversion programs should reject erroneous transactions and provide reports that explain why data was rejected. If the decision is made to correct the data, it will probably have to be reentered. Again, in some cases, additional programming can help remedy the problems. Programming for Data Correction Some simple automated routines can make the job of data correction much easier. If they require no manual intervention, it could be advantageous to simply put them into the main conversion program. However, the program may require that a user make the decision. If the existing data entry programs are not adequate for large-scale data correction efforts, some additional programs might have to be written for 325

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE error repair. For example, the existing system may not allow the display of records with a referential integrity problem, which are probably the very records that need correction. Custom programming will be required to make the change. SPECIFY THE MAPPING Often, crucial information needed for the conversion will be missing. If the old system can accommodate the missing information, it may be a matter of keying it in from original paper records. However, the original information may not be available anymore, or it may never have been collected. In that case, it may be necessary to put in special markers to show that the information is not available. Model Mismatches It can be difficult to go from a nonnormalized structure to a normalized structure because of the potential for problems in mapping from old to new. Many problems are the result of inconsistent and redundant data, a poor key structure, or missing information. If there is a normalized structure in the old system, there probably will not be as many difficulties. Other problems result from changed assumptions about the cardinality of relationships or actual changes in the business rules. Discovered Requirements The requirements of a system are almost never fully understood by the user or the developer prior to constructing the system. Some of the data requirements do not become clear until the test conversions are being run. At that point, it may be necessary to go back and revisit the whole development effort. Standard change and scope control techniques apply. Existing Documentation Data requirements are rarely right the first time because the initial documentation is seldom correct. There may be abandoned fields, mystery fields, obscure coding schemes, or undocumented relationships. If the documentation is thorough, many data conversion pitfalls can be avoided. Possible Mapping Patterns The mapping of old to new is usually very complex. There seems to be no useful canonical scheme for dealing with this set of problems. Each new conversion seems to consist of myriad special cases. In the general case, a given new field may depend on the values found in multiple fields contained in multiple records of a number of files. This works the other way as 326

Data Conversion Fundamentals well — one field in an old record may be assigned to different fields or even to different tables, depending on the values encountered. If the conversion also requires intelligent handling of updates and deletes to the old system, the problem is complicated even further. This is true when one source file is split into several destination files, and at the same time, one destination file receives data from several source files. Then, if just one record is deleted in a source file, some fields will have to be set to null in the destination file, but only those coming from the deleted source record. This method, however, may violate some of the integrity rules in the new database. It may be best to specify the mapping in simple tabular and textual fashion. Each new field will have the corresponding old fields listed, along with any special translation rules required. These rules could be documented as decision tables, decision trees, pseudocode, or action diagrams. Relational Mathematics In database theory, it is possible to join together all fields in a database in a systematic manner and create what is called the “universal relation.” Although this technique has little merit as a scheme for designing or implementing a database, it may be a useful device for thinking about the mapping of old to new. It should be possible to specify any complex mapping as a view based on the universal relation. The relational algebra or the relational calculus could be used as the specification medium for detailing the rules of the mapping in a declarative fashion. DESIGN THE CONVERSION Possibility of Manual Data Entry Before starting to design a computer program or a set of programs, reentering the data manually from source records should be considered a possibility. The effort and increased probability of random errors associated with manual data entry should be contrasted with the cost of developing an automated solution. Extra Space Requirements In a conversion, it will be necessary to have large temporary files available. These could double the amount of disk space required for the job. If it is not possible to provide this extra storage, it will be necessary to ensure that the conversion plan does not demand extra space. This has become less and less of a problem over the years, with the decreasing cost of storage, but could become an issue for large volumes of data. 327
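One minimal way to keep such a tabular specification executable is to record, for each destination field, its source fields and a small translation rule. The sketch below is illustrative only; the field names, the status codes, and the century rule are invented, and in practice the rules might equally be documented as decision tables, decision trees, pseudocode, or SQL fragments.

```python
# A tabular old-to-new mapping specification: each destination field lists
# its source fields and a translation rule. Everything here is a made-up
# example, not a rule taken from any particular system.

MAPPING = {
    # new field        (old fields,                          rule)
    "employee_name": (("first_name", "last_name"),
                      lambda first, last: f"{last}, {first}"),
    "status_code":   (("emp_status",),
                      lambda s: {"A": "ACTIVE", "T": "TERMINATED"}.get(s, "UNKNOWN")),
    "hire_date":     (("hire_yy", "hire_mm", "hire_dd"),
                      # assumed century rule for two-digit legacy years
                      lambda y, m, d: f"19{y}-{m:0>2}-{d:0>2}"),
}

old_record = {"first_name": "Ada", "last_name": "Byron", "emp_status": "A",
              "hire_yy": "87", "hire_mm": "3", "hire_dd": "2"}

new_record = {
    new_field: rule(*(old_record[f] for f in old_fields))
    for new_field, (old_fields, rule) in MAPPING.items()
}
print(new_record)
# {'employee_name': 'Byron, Ada', 'status_code': 'ACTIVE',
#  'hire_date': '1987-03-02'}
```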

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE Choice of Language The criteria for programming languages is not going to be too different from that used in any other application area. The programming language should be chosen according to the skills of the IS team and what will run on the organization’s hardware, or what is used by the purchased ETL software. The most appropriate language will allow error recovery, exception handling, control totals reporting, checkpoint and restart capabilities, full procedural capability, and adequate throughput. Most third-generation languages are sufficient if an interface to the source and target databases or file systems is available. Various classes of programs could be used, with different languages for each. For example, the records can be extracted from the old database with one proprietary product, verified and converted to the new layout with C, and input into the new database with a proprietary loader. SQL as a Design Medium The SQL language should be powerful enough to handle any data conversion job. The problem with SQL is that it has no error-handling capabilities and cannot produce a satisfactory control totals report as part of the update without going back and re-querying the database in various ways. Despite the deficiencies of SQL as a robust data conversion language, it may be ideal for specifying the conversion rules. Each destination field could have a corresponding SQL fragment that gave the rules for the mapping in a declarative fashion. The use of SQL as a design medium should lead to a very tight specification. The added advantage is that it translates to an SQL program very readily. Processing Time IS must have a good estimate for the amount of elapsed time and CPU time required to do the conversion. If there are excessive volumes of data, special efforts will be required to ensure adequate throughput. These efforts could involve making parallel runs, converting overnight and over weekends, buying extra-fast hardware, or fine-tuning programs. These issues are not unique to conversions but they must not be neglected to avoid surprises on the day of cutover to the new system. These issues are especially significant when there are large volumes of historical data for an initial conversion, even if ongoing runs will be much smaller. Interoperability There is a strong possibility that the old system and the new system will be on different platforms. There should be a mechanism for transferring the 328

Data Conversion Fundamentals data from one to the other. Tape, disk, or a network connection could be used. It is essential to provide some mechanism for interoperability. In addition, it is important to make sure that the media chosen can support the volumes of data and provide the necessary throughput. Routine Error Handling The conversion routine must include sufficient mechanisms for enforcing all business rules. When erroneous data is encountered, there might be a policy of setting the field to a default value. At other times, the record may be rejected entirely. In either case, a meaningful report of each error encountered and the resultant actions should be generated. It will be best if erroneous records are sent to an error file. There may be some larger logical unit of work than a record. If so, the larger unit should be sent to the error file and the entire transaction rolled back. Control Totals Every run of the conversion programs should produce control totals. At a minimum, there should be counts for every input record, every rejected recorded, every accepted record, and every record inserted into each output file or table. Finer breakdowns are desirable for each of these types of inputs and outputs. Every conversion run should be date and time stamped with start and end times, and the control report should be filed after inspection. Special Requirements for Data Warehousing Data warehousing assumes that the conversion issue arises on a routine, periodic basis. All of the problems that arise in a one-time conversion must be dealt with for an initial load, and then must be dealt with again for the periodic update. In a data warehouse situation, there will most likely be changes to source records that must be reflected into the data warehouse files. As discussed previously, there may be some complex mapping from old to new, and updates and deletes will greatly increase the complexity. There must be a provision for add, change, and delete transactions. A change transaction can often be handled as a paired delete and add, in some cases simplifying the programming. RECOVERY FROM ERROR Certain types of errors, such as a power failure, will interrupt the processing. If the system goes out in the middle of a 20-hour run, there has to be some facility for restarting appropriately. Some sort of checkpoint and 329

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE restart mechanisms are desirable. The operating system may be able to provide these facilities. If not, there should be an explicit provision in the design and procedures for dealing with this possibility. In some cases, it may be necessary to ensure that files are backed up prior to conversion. Audit Records After the data has been converted, there must be an auditable record of the conversion. This is also true if the conversion is an ongoing effort. In general, the audit record depends on the conversion strategy. There may be counts, checksums (i.e., row and column), or even old-versus-new comparisons done with an automated set of routines. These audit procedures are not the same as the test cases run to verify that the conversion programs worked. They are records produced when the conversions are run. CONCLUSION Almost all IS development work involves conversion of data from an old system to a new application. This is seldom a trivial exercise, and in many projects it is the biggest single source of customer dissatisfaction. The conversion needs to be given serious attention, and the conversion process needs to be planned as carefully as any other part of the project. Old applications are fraught with problems, and errors in the data will be common. The more tightly programmed the new application, the more problematic the conversion. It is increasingly common to make the conversion part of an ongoing process, especially when the operational data is in one system and the management information in another. Any data changes are made on the operational system and then, at periodic intervals, copied to the other application. This is a key feature of the data warehouse approach. All of the same considerations apply. In addition, it will be important to institutionalize the procedures for dealing with conversion. The conversion programs must be able to deal with changes to the operational system by reflecting them in the data warehouse. Special care will be required to design the programs accordingly.
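As a rough illustration of the ongoing case described above, the sketch below processes add, change, and delete transactions against a toy warehouse, treats a change as a paired delete and add, rejects erroneous transactions to an error file, and emits date-stamped control totals for the run. The transaction layout and the counts are assumptions for illustration only.

```python
# Ongoing (data-warehouse style) conversion run: apply add/change/delete
# transactions, handle a change as a paired delete and add, reject errors,
# and report control totals. All data shown is invented.

from collections import Counter
from datetime import datetime

warehouse = {"E1": {"dept": "Sales"}}          # current warehouse rows by key
transactions = [
    ("add",    "E2", {"dept": "Ops"}),
    ("change", "E1", {"dept": "Marketing"}),   # handled as delete + add
    ("delete", "E3", None),                    # no such row: will be rejected
]

totals = Counter(input=0, accepted=0, rejected=0)
errors = []

for action, key, data in transactions:
    totals["input"] += 1
    if action == "change":
        if key not in warehouse:
            totals["rejected"] += 1
            errors.append((action, key, "missing row"))
            continue
        del warehouse[key]                     # the paired delete ...
        action = "add"                         # ... followed by an add
    if action == "add":
        warehouse[key] = data
        totals["accepted"] += 1
    elif action == "delete":
        if key in warehouse:
            del warehouse[key]
            totals["accepted"] += 1
        else:
            totals["rejected"] += 1
            errors.append((action, key, "missing row"))

print(datetime.now().isoformat(), dict(totals))  # date-stamped control totals
print("error file:", errors)
```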


Chapter 27

Service Level Management Links IT to the Business

Janet Butler

Downtime is becoming unacceptably expensive as businesses increasingly depend on their information technology (IT) services for mission-critical applications. As user availability and response time requirements increase dramatically, service level management (SLM) is becoming the common language of choice for communication between IT and end users. In addition, to foster the growing focus on the user, SLM is moving rapidly into the application arena, turning from its traditional emphasis on system and network resources. E-BUSINESS DRIVES SERVICE LEVEL MANAGEMENT Businesses have long viewed IT as an overhead operation and an expense. In addition, when IT was a hidden function dealing with internal customers, it could use ad hoc, temporary solutions to address user service problems. Now, with electronic business gaining importance, IT is becoming highly visible as a front door to the business. However, while Internet visibility can prove highly beneficial and lucrative to businesses, it can also backfire. Amazon, eBay, and Schwab all learned this the hard way when their service failures hit The Wall Street Journal’s front page. And few other organizations would like their CEOS to read about similar problems. As such cases illustrate, downtime on mission-critical applications can cost businesses tens of thousands or millions of dollars per day. In the financial industries, for example, downtime can cost $200,000 per minute, according to one industry analyst. And poor end-to-end application response time can be nearly as costly. Not only does it cause serious tension between internal users and IT, but it creates considerable frustration for external users, and the competition may be only a mouse-click away. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE With IT now the main entryway to the business, businesses cannot afford the perception of less-than-optimal service. They are therefore increasingly adopting service level agreements (SLAs), service level management, and quality-of-service initiatives. In fact, some organizations have developed SLAs guaranteeing availability levels exceeding 99.9 percent, or aggressive application response times — which depend on optimal end-toend performance. SLM DEFINED Service level management (SLM) is a set of activities required to measure and manage the quality of information services provided by IT. A proactive rather than reactive approach to IT management, SLM manages the IT infrastructure — including networks, systems, and applications — to meet the organization's service objectives. These objectives are specified in the SLA, a formal statement that clearly defines the services that IT will provide over a specified period of time, as well as the quality of service that users can expect to receive. SLM is a means for the lines of business and IT to set down their explicit, mutual expectations for the content and extent of IT services. It also allows them to determine in advance what steps will be taken if these conditions are not met. SLM is a dynamic, interactive process that features: • • • •

• Definition and implementation of policies
• Collection and monitoring of data
• Analysis of service levels against the agreement
• Reporting in real-time and over longer intervals to gauge the effectiveness of current policies
• Taking action to ensure service stability

To implement service level management, the SLA relates the specific service-level metrics and goals of IT systems to business objectives. By linking the end-user and business process experience with what is happening in IT organizations, SLAs offer a common bridge between IT and end users, providing a clear understanding of the services to be delivered, couched in a language that both can understand. This allows users to compare the service they receive to the business process, and lets IT administrators measure and assess the level of service from end to end. SLAs may specify the scope of services, success and failure metrics, goal and performance levels, costs, penalties, time periods, and reporting requirements. The use of SLM offers businesses several benefits. It directs management toward clear service objectives and improves communication between IT

Service Level Management Links IT to the Business and users by enabling responsiveness to user issues. It also simplifies the management of network services because resource changes are made according to the SLA and are based on accurate user feedback. Furthermore, SLM clarifies accountability by allowing organizations to analyze service levels and evaluate IT’s effectiveness. Finally, by enabling businesses to optimize current resources and make educated decisions about the necessity for upgrades, it saves money and maximizes investments. FROM SYSTEM TO APPLICATION FOCUS In the early days of performance evaluation and capacity planning, the emphasis was on system tuning and optimization. The field first took off in the mid-1960s with the introduction of third-generation operating systems. The inefficiency of many of these systems resulted in low throughput levels and poor user response time. So, tuning and optimization were vital. As time passed, however, the vastly improved price/performance of computer systems began to limit the need for tuning and optimization. Many organizations found it cheaper to simply buy more hardware resource than to try and tune a system into better performance. Still, organizations continued to concentrate on system throughput and resource utilization, while the fulfillment of service obligations to the end user was of relatively low priority. Enter the PC revolution with its emphasis on end-user requirements. Enter also the client/server model to serve users, with its promise of speedy application development and vast amounts of information at users’ fingertips, all delivered at rapid response times. Of course, the reality does not always measure up. Now the Internet and World Wide Web have joined the fray, with their special concepts of speed and user service. Organizations are now attempting to plan according to Web time, whereby some consider a Web year to be 90 days, but WWW may well stand for “world wide wait.” So organizations are turning their focus to the user, rather than the information being collected. The service-desk/help-desk industry, for example, has long been moving toward user-oriented SLM. In the early 1990s, service-desk technology focused on recording and tracking trouble tickets. Later, the technology evolved to include problem-resolution capabilities. Next, the service desk started using technologies and tools that enabled IT to address the underlying issues that kept call volumes high. Today, organizations are moving toward business-oriented service delivery. IT is being called upon to participate as a partner in the corporate mission — which requires IT to be responsive to users/customers. 333

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE Today’s SLM requires that IT administrators integrate visibility and control of the entire IT infrastructure, with the ability to seamlessly manage service levels across complex, heterogeneous enterprise environments, using a single management interface. However, many IT organizations currently have monitors and probes in isolated parts of the network, or tools that monitor performance on certain platforms but not others. In addition, they may only receive after-the-fact reports of downtime, without proactive warnings or suggested actions. SLM requires a single, comprehensive solution whereby every facet of an IT infrastructure is brought into a single, highly automated, managed environment. This enables IT to quickly isolate and resolve problems, and act proactively in the best interest of the end user, rather than merely reacting to network or resource issues. And while comprehensive tools to do this were not available in the past, that situation is changing as the tools evolve. In this complex new environment, organizations must define IT availability in terms of applications rather than resources, and use language that both IT and business users can understand. Thus, in the past, IT’s assurance of 98 percent network availability offered little comfort to a salesman who could not book orders. It did not mean the application was running or the response time was good enough for the salesman. While SLM was formerly viewed as a lot of hot air, today’s business SLAs between IT and the line of business define what customers should expect from IT without problems. SLAs Tied to User Experience Current SLAs, then, are tied to applications in the end-user experience. With their focus on the user, rather than the information being collected, SLAs aim at linking the end user’s business process experience with what is happening in the IT organization. To this end, organizations are demanding end-user response time measurement from their suppliers, and for client/server in addition to mainframe application systems. For example, when one financial organization relocated its customer service center from a private fiber to a remote connection, call service customers were most concerned about response time and reliability. Therefore, they required a tool that provided response time monitoring at the client/server level. Similarly, a glass and plastics manufacturer sought a system to allow measurement of end-user response time as a critical component of user satisfaction when it underwent a complex migration from legacy to client/server systems. Although legacy performance over time provided sub334

Service Level Management Links IT to the Business second response time, client/server performance has only recently gained importance. To measure and improve response time in client/server environments, organizations must monitor all elements of the response time component. Application Viewpoint The application viewpoint offers the best perspective into a company’s mosaic of connections, any one of which could slow down the user. This is no news to end-user organizations. According to a 1999 survey of 142 network professionals, for example, conducted by International Network Services, 64 percent measure the availability of applications on the network to define network availability/performance. (INS, Sunnyvale, California, was a global provider of network consulting and software solutions, acquired by Lucent.) For this very complex environment, organizations must do root cause analysis if users have service problems. When IT organizations were more infrastructure oriented, service problems resulted in much fingerpointing, and organizations wasted valuable time passing the buck around before they found the domain responsible — be it the server, the network, or the connections. Now, however, as IT organizations change from infrastructure providers to service organizations, they are looking at the application level to determine what is consuming the system. SLM APPROACHES, ACTIVITIES, AND COMPONENTS Some analysts have defined four ways of measuring end-to-end response time: code instrumentation, network x-ray tools, capture/playback tools, and client capture. Code Instrumentation By instrumenting the source code in applications, organizations can define the exact start and end of business transactions, capturing the total roundtrip response times. This was the approach taken by Hewlett-Packard and Tivoli with their Application Response Measurement (ARM) application programming interface (API) initiative. For ARM's purposes, application management is defined as end-to-end management of a collection of physical and logical components that interact to support a specified business process. According to the ARM working group draft mission statement, “The purpose of the ARM API is to enable applications to provide information to measure business transactions from an end-user perspective, and the contributing components of response time in distributed applications. This information can be used to support SLAs, and to analyze response time across distributed systems.” 335
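The sketch below is not the ARM API itself; it is only a conceptual illustration, in Python, of what instrumenting a business transaction looks like: the start and end of the transaction are marked in the application, the round-trip time is captured, and the measurement is compared with an assumed SLA goal. The transaction, the goal, and the reporting format are all invented for the example.

```python
# Conceptual sketch of code instrumentation for end-user response time.
# Not the ARM API; the decorator simply marks transaction start/stop and
# records the elapsed time against an assumed SLA goal.

import time
from functools import wraps

RESPONSE_TIME_GOAL_SECONDS = 2.0     # assumed SLA goal for this transaction
measurements = []

def measure_transaction(name):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()                 # transaction start
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.perf_counter() - start   # transaction stop
                measurements.append(
                    (name, elapsed, elapsed <= RESPONSE_TIME_GOAL_SECONDS))
        return wrapper
    return decorator

@measure_transaction("book_order")
def book_order(order_id):
    time.sleep(0.1)                  # stands in for the real business logic
    return f"order {order_id} booked"

book_order(42)
for name, seconds, within_goal in measurements:
    print(f"{name}: {seconds:.3f}s, within SLA goal: {within_goal}")
```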

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE However, although the approach is insightful in capturing how end users see business transactions, it is also highly invasive, costly, and difficult, requiring modifications to the application source code as well as maintenance of the modifications. Many users want a nonintrusive system to measure end-user response time. Others need a breakdown by segment rather than a round-trip response time measurement. And, despite the promise, only three to five percent of ERP applications have been ARMed, or instrumented. Network X-Ray Tools A second collection approach is via x-ray tools, or network sniffers. An example is Sniffer Network Analyzer from Network Associates, Menlo Park, California. Sniffers use probes spread out in strategic locations across the network to read the packet headers, and calculate response times as seen from that probe point. Although noninvasive, this approach does not address the application layer. Because it does not see transactions in user terms, it does not capture response time from the end-user perspective. And, because the data was not designed for performance purposes, converting it into workload or user transaction-level metrics is not a trivial task. However, while the method might be considered the “hard way” to obtain performance data, it does work. Capture/Playback Tools Capture/playback tools use synthetic transactions, simulating user keystrokes and measuring the response times of these “virtual” users. While simulated transactions have a role in testing the applications’ potential performance, they do not measure the actual end-user’s response time experience. Examples are CAPBAK from Software Research, San Francisco, California, and AutoTester from AutoTester, Inc., Dallas, Texas. Client Capture Client capture is the fourth and most promising approach to measuring response time from the user’s perspective. Here, intelligent agents sit at the user’s desktop, monitoring the transactions of actual end users to capture the response time of business transactions. Client capture technology can complement network and systems management solutions, such as those from Hewlett-Packard, Tivoli, and Computer Associates. Examples of client capture products include the VitalSuite line from INS and FirstSense products from FirstSense Software, Burlington, Massachusetts. Service level management encompasses at least four distinct activities: planning, delivery, measurement, and calibration. Thus, the IT organization and its customers first plan the nature of the service to be provided. Next, the IT organization delivers according to the plan, taking calls, resolv336

Service Level Management Links IT to the Business ing problems, managing change, monitoring inventory, opening the service desk to end users, and connecting to the network and systems management platforms. The IT organization then measures its performance to determine its service delivery level based on line of business needs. Finally, IT and the business department continually reassess their agreements to ensure they meet changing business needs. Delivering service involves many separate disciplines spanning IT functional groups. These include network operations, application development, hardware procurement and deployment, software distribution, and training. SLM also involves problem resolution, asset management, service request and change management, end-user empowerment, and network and systems management. Because all these disciplines and functions must be seamlessly integrated, IT must determine how to manage the performance of applications that cross multiple layers of hardware, software, and middleware. The following general components constitute SLM, and each contributes to the measurement of service levels: • Network availability: a critical metric in managing the network • Customer satisfaction: not as easily quantified, customer satisfaction results from end-users’ network experience, so IT must manage the network in light of user expectations • Network performance • Application availability: this, along with application response time, is directly related to customer satisfaction It is difficult to define, negotiate, and measure SLAs. The metrics for network availability and performance include the availability of devices and links connected to the network, the availability of servers, the availability of applications on the network, and application response time. Furthermore, to track any SLA elements, it is necessary to measure and report on each. SLAs can include such elements as network performance, network availability, network throughput, goals and objectives, and quality-of-service metrics (e.g., mean time to repair, and installation time). Other possible SLA elements include conditions/procedures for updating or renegotiating, assignment of responsibilities and roles, reporting policies and escalation procedures, measurement of technology failures, assumptions and definitions, and trend analyses. SLAs may also include penalties for poor performance, help-desk availability, baseline data, benchmark data, application response time, measurement of process failures, application availability, customer satisfaction metrics, and rewards 337

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE for above-target performance. But a main objective of SLAs is setting and managing user expectations. IMPROVING SERVICE LEVEL MANAGEMENT While the concept of SLM has gained widespread recognition, implementation has been slower, in part due to the complexity of the network environment. In addition, according to the 1999 INS survey findings on SLM, it is a continuing challenge. The good news is that 63 percent of respondents with SLM capabilities in place were satisfied with those capabilities in 1999 (according to the survey) — a dramatic improvement over the previous year. However, despite the high satisfaction with SLM, improving it was important to more than 90 percent of respondents. Furthermore, organizational issues presented the greatest challenge to improving SLM for half the respondents, and managerial issues were the top challenge for another third. Also, customer satisfaction was considered an important SLM metric by 81 percent of respondents. Finally, the top barriers to implementing or improving SLM were said to be organizational/process issues, other projects with higher priority, and the difficulty in measuring SLAs. Despite the fact that SLM and SLAs are moving in the right direction by focusing on applications and end-user response time, the SLA tool market is not yet mature. Instead, SLAs are ahead of the software that is monitoring them. Indeed, 47 percent of the network professionals surveyed by INS in 1999 said that difficulty in measuring SLAs was a significant barrier to implementing or improving SLM. Although SLA contracts have not been monitorable by software people until recently, that situation is changing. Vendors are starting to automate the monitoring process and trying to keep pace with the moving target of customers’ changing needs. Businesses should also realize that SLAs are a tool for more than defining service levels. Thus, SLAs should also be used to actively solicit the agreement of end users to service levels that meet their needs. Often, the providers and consumers of IT services misunderstand the trade-off between the cost of the delivered service and the business need/benefit. The SLA process can help set more realistic user expectations and can support higher budget requests when user expectations exceed IT’s current capabilities. Businesses can implement SLM for important goals such as improving mission-critical application availability and dependability, and reducing application response time as measured from the user’s point of view. In 338

Service Level Management Links IT to the Business general terms, SLM can also enhance IT organizational efficiency and costeffectiveness. To improve their SLM capabilities and meet these objectives, organizations can address the relevant organizational issues, providing processes and procedures that aim at consistent service delivery and associated user satisfaction. In addition, because application performance has become paramount, organizations can implement tools to monitor and measure the behavior of those mission-critical applications that depend on network availability and performance. TOWARD A BUSINESS-PROCESS FOCUS As IT continues to be a business driver, some analysts predict that SLM will move toward a focus on the business process, whereby organizations will abstract the state of the business processes that run their companies. In turn, the available data and its abstraction will consolidate into a dashboard reporting system. As organizations move toward a business dashboard, the data will be just a given. Because solution providers are rapidly becoming sophisticated in making data available, this is already happening today — and more rapidly than expected.
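As a rough sketch of such a dashboard-style rollup, the example below compares assumed availability and response-time measurements for one application against invented SLA targets and prints a simple compliance summary. Real SLM tooling would feed these figures from automated monitoring rather than hard-coded values.

```python
# Minimal SLA dashboard rollup: compare measured availability and average
# response time against targets. All figures and targets are invented.

SLA_TARGETS = {"availability_pct": 99.9, "avg_response_s": 2.0}

# One month of illustrative measurements for a mission-critical application.
minutes_in_month = 30 * 24 * 60
downtime_minutes = 50
response_samples = [1.2, 1.8, 2.4, 1.1, 1.6]

availability = 100.0 * (minutes_in_month - downtime_minutes) / minutes_in_month
avg_response = sum(response_samples) / len(response_samples)

dashboard = {
    "availability_pct": (round(availability, 3),
                         availability >= SLA_TARGETS["availability_pct"]),
    "avg_response_s":   (round(avg_response, 2),
                         avg_response <= SLA_TARGETS["avg_response_s"]),
}
for metric, (value, met) in dashboard.items():
    print(f"{metric}: {value} (SLA met: {met})")
# availability_pct: 99.884 (SLA met: False), i.e., 50 minutes of downtime
# already breaches a 99.9 percent monthly target; avg_response_s: 1.62 (True)
```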



Chapter 28

Information Systems Audits: What’s in It for Executives?

Vasant Raval
Uma G. Gupta

Companies in which executives and top managers view the IS audit as a critical success factor often achieve significant benefits that include decreases in cost, increases in profits, more robust and useful systems, enhanced company image, and the ability to respond quickly to changing market needs and technology influences. Both of the following examples are real and occurred in companies in which one of the authors worked as a consultant. In both situations, IS auditors played a critical role in not only preventing significant monetary loss for the company, but also in enhancing the image of the company to its stakeholders: • Scenario 1. One fine morning, auditors from Software Publishers Association (SPA) knocked on the doors of one of your business units. They wanted to verify that every copy of every software package in your business unit was properly licensed. The unit had 1700 microcomputers on a local area network. Fortunately, information systems (IS) auditors had recently conducted an audit of software licenses in the business unit. This encouraged the IS auditors and business managers from the company to work closely with SPA auditors who reviewed the audit work and tested a sample of the microcomputers at the company’s facility. The SPA auditors commended the business unit for its exemplary records and outstanding monitoring of software licenses. The investigation was completed in a few hours and the company was given a clean bill in the software licensing audit. • Scenario 2. Early in 1995, the vice president of information systems of a Fortune 500 company visited with the director of audit services and rec0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE ommended that the company’s efforts to be compliant with Year 2000 (Y2K) should be audited. The vice president was sensitive to the fact that such audits, although expensive and time-consuming, do not have any immediate or significant monetary returns. After considerable discussion, it was agreed that an initial exploratory audit of the current status of the Y2K problem should be conducted. The audit was to outline and discuss the implications of the Y2K problem on the company’s profits and provide an initial estimate of the cost of conducting the audit. A few weeks later, IS auditors presented a report to the board of directors, which reviewed the findings and mandated IS managers and other managers throughout the company to invest resources where necessary to become Y2K compliant by December 1998.1 Given the critical role that IS auditors play in the financial success and stability of a company, IS audits should not be only under the purview of the information systems department. Instead, executives and other top managers should understand and support the roles and responsibilities of IS auditors and encourage their active participation at all levels of decision making. A nurturing and supportive environment for IS auditors can result in significant benefits for the entire organization. The purpose of this chapter is to present a broad overview of the IS audit function and its integral role in organizational decision making. The functions of the IS audit department are discussed and ways in which the IS audit can be used as a valuable executive decision-making tool are outlined. Recommendations for leveraging an IS audit report to increase organizational effectiveness are also outlined. WHAT IS AN IS AUDIT? Information systems audit (hereafter ISA) refers to a set of technical, managerial, and organizational services provided by a group of auditing experts in the area of information systems and technologies. IS auditors provide a wide range of consulting services on problems, issues, opportunities, and challenges in information systems and technologies. The goal of an IS audit may often vary from project to project or even from system to system. However, in general, the purpose of an IS audit is to maximize the leverage on the investments in information systems and technologies and ensure that systems are strategically aligned with the mission and overall goals of the organization. IS audits can be conducted in a number of areas, such as utilization of existing systems, investments, emerging technologies, computer security, help desks, electronic commerce, outsourcing, reengineering, and electronic data interchange (EDI). Other areas warranting an IS audit include database management, data warehousing, intranets, Web page design and mainte342

Exhibit 1. Categories of IS Audits

• Control environments audits. Provide guidelines for enterprisewide deployment of technology resources. Examples: business continuity (or disaster recovery) plans, PC software licensing, Internet access and control, LAN security and control.
• General control audits. Review general and administrative controls for their adequacy and reliability. Examples: data center security, Internet security, end-user systems access and privileges, role and functions of steering committees.
• Financial audits:
— Review of automated controls designed as part of the systems. Examples: limit checks, compatibility checks, concurrency controls in databases.
— Provide assistance for financial audits. Examples: use of generalized audit software packages and other computer-assisted audit tools to review transactions and their financial results.
• Special projects. Projects initiated to satisfy one-time needs. Examples: feasibility study of outsourcing of projects, processes, or systems; risk analysis for proposed offshore software development initiatives.
• Emerging technologies. Review and feasibility analysis of newer technologies for the business. Examples: electronic data interchange, Web technology, telecommuting, telephony, imaging, data warehousing, data mining.

nance, business intelligence systems, retention of IS personnel, migration from legacy systems to client/server environments, offshore software contracts, and developing strategic information systems plans. Given the dismal statistics on IS projects that are delivered within-budget and on-time, a number of companies are mandating audits of their information systems projects. Exhibit 1 identifies the different categories of IS audits. TRADITIONAL APPROACH VERSUS VALUE-ADDED APPROACH The traditional view of the IS audit function differs from the value-added view found in many progressive, forward-thinking organizations. In the traditional view, an IS audit is something that is “done to” a department, unit, or project. On the other hand, in a value-added approach an audit is viewed as something that is “done for” another department, unit, or project. This is not a simply play on words but is instead a philosophy that differentiates between environments that are controlling and nurturing; it exemplifies workplaces where people compete versus cooperate. In the traditional approach, the audit is viewed as a product, whereas in the value-added approach the audit is viewed as a service that enhances the overall quality and reliability of the end product or service that the company produces. In traditional environments, the auditor is viewed as an adversary, cop, and trouble-maker. On the other hand, in a value-added environment, the auditor is viewed as a consultant and a counselor. The IS auditor is viewed as one who applies his or her knowledge and expertise to leverage the max343

Exhibit 2. Traditional Approach versus Value-Added Approach to Auditing

• Traditional approach: Something done to a unit, department, or project. Value-added approach: Something done for enhancing the quality, efficiency, and effectiveness of a unit, department, or project.
• Traditional approach: Audit is a product that is periodically delivered to specific units or departments. Value-added approach: Audit is an ongoing service provided to improve the “quality of life” of the organization.
• Traditional approach: The auditor plays an adversarial role. Value-added approach: The auditor is a consultant whose goal is to leverage resource utilization.
• Traditional approach: The auditor is a “best cop.” Value-added approach: The auditor is a houseguest.
• Traditional approach: The primary objective of auditing is to find errors and loopholes. Value-added approach: The primary objective of auditing is to increase the efficiency, effectiveness, and productivity of the organization.
• Traditional approach: Auditing is an expense. Value-added approach: Auditing is an investment.
• Traditional approach: The contribution of an auditor is temporary. Value-added approach: An auditor is a life-long business partner.

imum return on investments in information systems and technologies. The auditor is not someone who is out looking for errors but is instead an individual or a group of individuals who look for ways and means to improve the overall efficiency, effectiveness, and productivity of the company. Unlike the traditional approach where the auditor is viewed as someone who is on assignment, the value-based approach views the auditor as a long-term business partner. See Exhibit 2 for a summary of the key differences between the traditional approach and the value-added approach. ROLE OF THE IS AUDITOR The role of an IS auditor is much more than simply auditing a project, unit, or department. An IS auditor plays a pervasive and critical role in leveraging resources to their maximum potential and also in minimizing the risks associated with certain decisions. An IS auditor, therefore, wears several hats to ensure that information systems and technologies are synergistically aligned with the overall goals and objectives of the organization. Some key roles that an IS auditor has are outlined and discussed below: Internal Consultants Good IS auditors have a sound understanding of the business and hence can serve as outstanding consultants on a wide variety of projects. They 344

Information Systems Audits: What’s in It for Executives? can offer creative and innovative solutions to problems and identify opportunities where the company can leverage its information systems to achieve a competitive edge in the marketplace. In other words, IS audits can help organizations ask critical and probing questions regarding IS investments. The consultant role includes a wide variety of issues, including cost savings, productivity, and risk minimization. IS audits can help firms realize cost savings and proactively manage risks that are frequently associated with information technologies. IS audits in many cases support the financial audit requirements in a firm. For example, one of the authors of this chapter audited the review of a large offshore project resulting in savings of $3.4 million to the company. The auditor interviewed over 35 technical and management staff from the business unit and from the offshore facility. Based on the recommendations of the IS auditor, the offshore software development process was reengineered. The reengineering resulted in a well-defined and structured set of functional requirements, rigorous software testing procedures, and enhanced cross-cultural communications. The implications of the IS audit were felt not only on the particular project but on all future offshore IS projects. Change Agents IS auditors should be viewed as powerful change agents within an organization. They have a sound knowledge of the business and this, combined with an acute sense of financial, accounting, and legal ramifications of various organizational decisions, makes them uniquely qualified to push for change within an organization. For example, a company was having a difficult time implementing security measures in its information systems department. Repeated efforts to enlist the support of company employees failed miserably. Finally, the company sought the help of its IS audit team to enforce security measures. IS auditors acted as change agents and educated employees about the consequences of failing to meet established security measures. Within three months the company had one of the tightest security ships in its industry. Experts Many IS auditors specialize in certain areas of the business such as IS planning, security, system integration, electronic commerce, etc. These auditors not only have a good understanding of the technical issues, but also business and legal issues that may influence key information systems and projects. Hence, while putting together a team for any IS project, it is worthwhile to consider including an IS auditor as a team member. 345

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE Advisors One of the key roles of an IS auditor is to serve as an advisor to the business manager on IS issues that have an enterprisewide effect. The advisory role often spans both technical and managerial issues. Examples of situations in which IS auditors could be used as advisors include software licensing management, establishing a standardization policy for hardware and software, evaluating key IS projects, and ensuring the quality of outsourcing contracts. IS auditors not only monitor the progress of the project but also provide timely advice if the project is going haywire. It is worthwhile to always include a member from the IS audit team on IS ventures that have significant implications for the organization. Advocates IS auditors can serve as outstanding advocates to promote the information system needs and functions of business units to top management. As neutral parties who have a stake in the success of the company, their views are often likely to get the attention of top management. IS auditors cannot only serve as advocates of the technology and personnel needs of the business unit, but also emphasize the strategic role of information systems in the success of both the business unit and the organization at large. IS auditors also play a critical role in ensuring the well-being of the organization. For example, IS auditors have often played a leading role in convincing top management of the importance of investing in computer security, without which the organization may simply cease to be in business. ROLE OF EXECUTIVES IN CAPITALIZING THE IS AUDIT FUNCTION Successful and pragmatic companies view the IS audit function as an integral and vital element in corporate decision making. Companies that view IS audit as an information systems function — or even worse, as merely an audit function — will fail to derive the powerful benefits that an IS audit can provide. This section discusses how companies can use the IS audit to achieve significant benefits for the entire organization. Be Proactive The IS audit should not be viewed as a static or passive function in an organization that is called to act on a “need-only” basis. Instead, the IS audit function should be managed proactively and should be made an integral part of all decision making in the organization. The auditor is an internal consultant whose primary goal is to provide the information and tools necessary to make sound decisions. The auditor’s role is not limited to one department or even to one project; instead, the goal of the auditor is to help each business unit make sound technology decisions so as to have a far-reaching and positive impact on the entire organization. However, this 346

Information Systems Audits: What’s in It for Executives? cannot be achieved unless companies are proactive in tapping into the skillset of its IS auditors. Increase Visibility of the IS Audit Executives who view the IS audit function as a necessary evil will be doing grave injustice to their organizations. Top management should take an active role in advocating the contribution of the IS audit team to the organization. Executives must play an active role in promoting the critical role and significant contributions of IS auditors. Publicizing projects and systems where an IS audit resulted in significant savings to the company or led to better systems is a good way to increase organizational understanding of IS audits. Many companies also mandate IS audits for all projects and systems that exceed a certain minimum dollar value, thus increasing the visibility and presence of IS auditors. Enhance the IS Auditor’s Image Encourage business units managers to view the IS audit not as a means to punish individuals or units, but as an opportunity to better utilize information systems and technologies to meet the overall goals of the organization. Include IS auditors in all key strategic committees and long-range planning efforts. Bring IS auditors early on into the development phase of a project so that project members view them as team players rather than “cops.” Provide Resources The IS audit, like other audit functions, requires hardware, software, and training resources. Companies that recognize the critical role of IS auditors support their resource needs and encourage their active participation. They recognize that a good and robust audit system can pay for itself many times over in a short span of time. Given the rapid changes in technology, auditors not only need hardware and software resources to help them stay on the leading edge, but should also be given basic training in the use of such technologies. Communicate, Communicate, Communicate Effective communication between business units and IS auditors is vital for a healthy relationship between the two groups. Business unit managers should know the specific role and purpose of an IS audit. They should have a clear understanding of who will review the auditors’ report and the actions that will be initiated based on that report. IS auditors, on the other hand, should be more open in their communication with business managers and communicate issues and concerns, both informally and formally. 347

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE They should always be good team players and understand that their role is to help and support the organization in achieving its full potential. CONCLUSION The IS audit is a critical function for any organization. What separates the successful organizations from the less successful ones is the ability to leverage the IS audit function as a vital element in organizational decision making. Companies in which executives and top managers view the IS audit as a critical success factor often achieve significant benefits, including decreases in costs, increases in profits, more robust and useful systems, enhanced company image, and ability to respond quickly to changing market needs and technology influences. Notes 1. Editor’s note: Although already old and linked to an issue that does not get much attention anymore, this example illustrates well the importance of understanding the effects of IS audits and their results on the business performance of an organization.


Chapter 29

Cost-Effective IS Security via Dynamic Prevention and Protection

Christopher Klaus

This chapter presents a fresh perspective on how the IS security mechanism should be organized and accomplished. It discusses the unique characteristics of the cyberspace computing environment and how those characteristics affect IS security problems. It describes three approaches to resolving those problems and analyzes the effectiveness of each. THE CYBERSPACE ENVIRONMENT In cyberspace, one cannot see, touch, or detect a problem. Most organizations find it difficult to allocate funds to address problems that their executives cannot directly experience. Comprehensive prevention and protection is a complex, multilayer process that reaches across networks, servers, desktops, and applications. It takes a considerable expenditure in both software and services and in highly trained staff to properly protect online information assets from attack or misuse. And yet, security for online business operations is rarely a direct part of an organization’s core expertise. For these reasons, many organizations underinvest in security. The problem with underprotection is that security breaches have a direct effect on profitability, either through business interruption loss, merger and acquisition due diligence, legal and shareholder/stakeholder liability, regulatory compliance, or negative publicity. An information processing application or network looks the same (at least externally) from the time of an attacker’s initial reconnaissance through penetration and



DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE subsequent attack on the application or network. If risks associated with the network are not adequately addressed, economic and operational harm may occur before the damage is discovered and remedied. The specific risk facing any individual organization is defined by the combination of the threat to the information resource or the application that processes it, its vulnerability to compromise, the economic consequences of an assault, and the likelihood of a successful attack. During the past few years, a large number of commercial and government organizations have studied the challenges associated with reducing risk within such a complex environment. Within the physical domain, decision makers typically have minutes, hours, days, or even weeks to respond to potential or actual attacks by various types of intruders. This is not true in the world of cyberspace. Security decisions need to be made almost instantaneously, requiring highly automated sensors and management platforms capable of helping security administrators focus on the most important security events immediately, with an absolute minimum of false alarms or false-positives. At the same time, the security solution must not unduly affect normal online business operations. The Four Categories of Human Threats Four basic human threat categories exist in cyberspace: internal and external, structured and unstructured. Internal Threat: Unstructured. The unstructured internal threat is posed by the average information processing application user. Typically, this individual lacks awareness of existing technical computing vulnerabilities and is responsible for such things as device use errors and network crashes. These result from inadvertent misuse of computing resources and poor training. When these individuals exploit computing resources for illegal gain, they typically misuse authorized privileges or capitalize on obvious errors in file access controls. Internal Threat: Structured. The structured internal threat is posed by an authorized user who possesses advanced knowledge of network vulnerabilities. This person uses this awareness to work around the security provisions in a simplistically configured network. An aggressive, proactive IS security mechanism must be deployed to counter the threat that this person’s activities may pose. External Threat: Unstructured. The unstructured external threat created by the average World Wide Web user is usually not malicious. Typically, this individual lacks the skills and motivation to cause serious damage to a network. However, this person’s curiosity can lead to unintentional system crashes and the loss of data files. 350

External Threat: Structured. The structured external threat stems from someone with detailed knowledge of network vulnerabilities. This person has access to both manual and automated attack tools that permit compromising most IS security programs, especially when the intruder does not perceive a risk of detection or apprehension. In particular, the development of hybrid threats (automated integrations of virus technologies and attack techniques) has created a virulent new class of structured external threats designed specifically to elude traditional security infrastructure such as firewalls or anti-virus mechanisms.
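The risk definition above combines threat, vulnerability, economic consequence, and likelihood. As a rough, illustrative sketch of how those factors might be combined into a relative ranking across the four human threat categories, consider the following; the 1-to-5 ratings and the simple multiplicative model are assumptions made for this example, not part of the chapter.

# Illustrative only: the 1-5 ratings below are hypothetical, not values from the chapter.

THREAT_PROFILES = {
    # category: (threat capability, typical likelihood of an attempt)
    "internal-unstructured": (2, 5),
    "internal-structured":   (4, 2),
    "external-unstructured": (2, 4),
    "external-structured":   (5, 3),
}

def risk_score(threat, vulnerability, impact, likelihood):
    """Combine the four factors named in the text into a single relative score."""
    return threat * vulnerability * impact * likelihood

def rank_risks(vulnerability, impact):
    """Rank the four human threat categories for one asset, highest risk first."""
    scored = {
        category: risk_score(threat, vulnerability, impact, likelihood)
        for category, (threat, likelihood) in THREAT_PROFILES.items()
    }
    return sorted(scored.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # Hypothetical asset: moderately vulnerable (3 of 5), high business impact (5 of 5).
    for category, score in rank_risks(vulnerability=3, impact=5):
        print(f"{category:24s} relative risk = {score}")

Even a coarse ranking of this kind can help a small security staff decide where monitoring and countermeasure effort should go first.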

The IS Security Issues of the Virtual Domain Within the virtual domain, the entire sequence that may be associated with a probe, intrusion, and compromise of a network, server, or desktop often can be measured in milliseconds. An attacker needs to locate only one exposed vulnerability. By contrast, the defenders of an application or network must address hundreds of potential vulnerabilities across thousands of devices. At the same time, these defenders must continue to support an array of revenue-generating or mission-enabling operations. The virtual domain is not efficiently supported by conventional manual audits, random monitoring of information processing application operations, and nonautomated decision analysis and response. It requires strategic insertion and placement of technical and procedural countermeasures, as well as rapid, automated responses to unacceptable threat and vulnerability conditions involving a wide array of attacks and misuse. READY-AIM-FIRE: THE WRONG APPROACH The primary challenges associated with bringing the network security domain under control result from its relative complexity, as well as the shortage of qualified professionals who understand how to operate and protect it. Some organizations have adequate and well-trained IS staff. The norm, however, is a small, highly motivated but outgunned team that focuses most of its energies on user account maintenance, daily emergencies, and general network design reviews. Few staff have time to study evolving threat, vulnerability, and safeguard (countermeasure) data, let alone develop policies and implementation plans. Even fewer have time to monitor network activity for signs of application or network intrusion or misuse. This situation results in a “ready-aim-fire” response to IS security vulnerabilities, achieving little more than to create a drain on the organization. This is the typical sequence of events: 1. IS executives fail to see the network in the context of the actual risk conditions to which it is exposed. These individuals understand 351


Exhibit 1. The Ad-Hoc Approach to Safeguard Selection Does Not Work

the basic technology differences between such operating systems as Windows NT/2000/XP and Sun Solaris. They also understand how products such as Oracle, Sybase, Internet Explorer, Microsoft Word, PowerPoint, and Excel enhance operations. However, these individuals typically have little knowledge about the vulnerabilities associated with the use of such products and can allow threats to enter, steal, destroy, or modify the enterprise’s most sensitive data. 2. IS safeguards are implemented in an ad hoc manner due to this incomplete understanding of the problem (Exhibit 1). There is no real program to map security exposures to IS operational requirements, no study of their effects on either threats or vulnerabilities, and no analysis of the return on security investment. That is: SECURITY = DIRECT TECHNICAL COUNTERMEASURES

(The latter include such things as firewalls, data encryption, and security patches.) 3. These organizations are left with a false sense of security (Exhibit 2). They believe that the risk has been addressed, when in fact many threats and vulnerabilities remain. 4. As a result, risk conditions continue to degrade as users alter system and safeguard configurations and work around the safeguards.

Exhibit 2. What the Network Really Looks Like
(Figure: a network of routers, firewall/router configurations, communications links, and modem access points connecting the organization to the world. Ad hoc safeguards yield only partial, uncertain reduction of threats and vulnerabilities, and it is unclear what is addressed and what is not across the external/internal and structured/unstructured threat categories.)

LOOKING FOR MANAGEMENT COMMITMENT

The approach just described is obviously not the answer. As noted in Exhibit 3, online vulnerability conditions are complex; encompass many networks, servers, and desktops; and require more than token attention. Success within the virtual domain will depend on the acceptance and adoption of sound processes that support a sequential and adaptive IS security model. However, an attempt to obtain the commitment of the organization's senior executives to an investment in new IS security may be rejected. The key to obtaining support from senior executives is a clear presentation of how the organization will receive a return on its investment.

A GOOD START IS WITH WHAT IS UNDERSTOOD

The best place to start developing a new IS security solution is with what is already understood and can be applied directly to the new problem domain. In this case, one starts with the following steps:

1. Define sound security processes.
2. Create meaningful and enforceable policies.
3. Implement organizational safeguards.
4. Establish appropriate program metrics.
5. Conduct frequent IS security program audits, which evaluate variance between specific organizational IS security policies and their actual implementation (see the sketch following this list).
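Of the five steps, the audit in step 5 is the most mechanical, and a minimal sketch may make the idea of measuring variance concrete. The policy items, observed values, and helper function below are hypothetical, invented for illustration; they are not drawn from the chapter or from any particular product.

# Hypothetical example: policy requirements and observed settings are invented for illustration.

POLICY = {
    "min_password_length": 8,
    "default_passwords_changed": True,
    "os_patches_current": True,
    "modem_access_disabled": True,
}

def audit_variance(policy, observed):
    """Return the policy items whose observed values do not satisfy the policy."""
    findings = []
    for control, required in policy.items():
        actual = observed.get(control)
        if isinstance(required, bool):
            compliant = (actual == required)
        else:  # numeric thresholds such as minimum lengths
            compliant = (actual is not None and actual >= required)
        if not compliant:
            findings.append((control, required, actual))
    return findings

if __name__ == "__main__":
    observed = {
        "min_password_length": 6,          # weaker than policy
        "default_passwords_changed": True,
        "os_patches_current": False,       # missing patches
        "modem_access_disabled": True,
    }
    for control, required, actual in audit_variance(POLICY, observed):
        print(f"VARIANCE: {control}: policy requires {required}, found {actual}")

In practice, the observed values would come from scanning and configuration-management tools; the point is that the audit reports variance from policy, not raw configuration data.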

Exhibit 3. Vulnerabilities Are Located throughout the Network Architecture
(Figure: three layers of exposure between the organization and the outside world. Layer 1, communications and services: TCP/IP, IPX, X.25, Ethernet, FDDI, router configurations, hubs/switches, and modem access. Layer 2, operating systems: UNIX, Windows NT/2000/XP, Mac OS X, Novell, MVS, DOS, OS/2, VMS. Layer 3, applications: databases, Web servers, Internet browsers, maintenance tools, office automation.)

Without established process and rigor, successful, meaningful reduction of network risk is highly unlikely. This situation also ensures that there will be a major variance between the actual IS security program implementation and the organization’s IS security policy. DIRECT RISK MITIGATION Without an understanding of the total risk to their networks, many organizations move quickly to implement conventional baseline IS security solutions such as: • Identification and authentication (I&A) • Data encryption • Access control This approach is known as direct risk mitigation. Organizations that implement this approach will experience some reduction in risks. However, these same organizations will tend to leave significant other risks unaddressed. The network security domain is too complex for such an ad hoc approach to be effective. 354

Exhibit 4. Implementation of Sound Risk Management Process Will Ensure Reduced Risk
(Figure: a cyclic risk management model. Perform frequent risk assessments to form the basis for security program and requirements planning; implement and enforce value-added safeguards, procedures, and technical countermeasures; manage security program implementation; perform frequent risk posture assessments, reviews, and audits to measure effectiveness; and refine as necessary.)

Incorporating risk analysis, policy development, and traditional audits into the virtual domain will provide the initial structure required to address many of these issues. At a minimum, the IS security program must consist of well-trained personnel who: • Adhere to sound, standardized processes • Implement valid procedural and technical solutions • Provide for audits intended to support potential attack or information application misuse analysis This approach is captured by the formula: SECURITY = RISK ANALYSIS + POLICY + DIRECT TECHNICAL COUNTERMEASURES + AUDIT

If implemented properly, direct risk mitigation provides 40 to 60 percent of the overall IS security solution (Exhibit 4). This model begins, as should all security programs, with risk assessment. The results support computing operations and essential enterprise planning efforts. Without proper risk analysis processes, the IS security policy and program lack focus and traceability (Exhibit 5). Once a risk assessment has been conducted, the individuals responsible for implementation will acquire, configure, and operate the defined network solution. Until now, little has been done to ensure that clear technical IS security policies are provided to these personnel. The lack of guidance and rationale has resulted in the acquisition of non-value-added technical safeguards and the improper and insecure configuration of the associated

Exhibit 5. Ensuring a Sound Security Policy
(Figure: a return-on-investment-based policy addresses specific threats and vulnerabilities across the network: disable maintenance backdoors, change default passwords, configure the router as a filter, apply all operating system patches, install identification and authentication, display a monitoring banner page, encrypt data files, control modem access, provide user security training, and audit and review. Countermeasures are mapped against the external/internal and structured/unstructured threat categories.)

applications once these mechanisms have arrived within the operational environment. One other major problem typically occurs within the implementation phase. Over time, administrators and users alter system configurations. These alterations re-open many of the vulnerabilities associated with the network’s communications services, operating systems, and applications. This degradation has driven the requirement represented within the final phase of the risk management cycle. Risk posture assessments (audits) are linked to the results of the risk assessment. Specifically, risk posture assessments determine the organizational IS security policy compliance levels, particularly as they define the variance from the policy. The results of such assessments highlight program weaknesses and support the continuous process of measuring compliance of the IS security policy against actual security practice. Organizations can then facilitate a continuous improvement process to reach their goals. Risk Posture Assessment Results The results of a risk posture assessment can be provided in a number of individual formats. Generally, assessment results may be provided to: 356

Cost-Effective IS Security via Dynamic Prevention and Protection • Technicians and engineers in a format that supports corrective action • Security and network managers in a format that supports program analysis and improvement • Operations executives in a format that summarizes the overall effectiveness of the IS security program and its value to the organization This approach is sound, responsive, and simple to implement. However, major problems still exist, and this approach addresses only 40 to 60 percent of the solution. Attackers do not care about this 40 to 60 percent — they only care about the remaining 40 to 60 percent that has been left exposed. Any success associated with this type of process depends on proper initial system and countermeasure implementation and a fairly static threat and vulnerability environment. This is not the case in most organizations. Normally, the IS security exposures not addressed by this approach include: • An active, highly knowledgeable, and evolving threat • A greatly reduced network security decision and response cycle • Network administrators and users who misconfigure or deliberately work around the IS security countermeasures • Low levels of user and administrator awareness of the organization’s IS security policies and procedures — and the threats and vulnerabilities those policies are designed to detect and resolve • Highly dynamic vulnerability conditions The general classes of vulnerabilities involve: • Design inadequacies in hardware and software • Implementation flaws, such as insecure file transfer mechanisms • Administration deficiencies Although direct risk mitigation is a good start to enhancing IS security, serious threats and vulnerability conditions can still leave the network highly susceptible to attack and misuse. The next level of response is described as dynamic prevention and protection. DYNAMIC PREVENTION AND PROTECTION The world of cyberspace requires an adaptive, highly responsive process and product set to ensure ongoing, consistent risk reduction. This solution is dynamic prevention and protection, which is discussed further in this chapter. It is captured in the formula: SECURITY = RISK ANALYSIS + POLICY + IMPLEMENTATION + THREAT AND VULNERABILITY MONITORING + ACTIVE BLOCKING OF IMMEDIATE THREATS + LONGER-TERM RESPONSE TO NONCRITICAL THREATS AND VULNERABILITIES 357

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE The dynamic protection model consists of a proactive cyclic risk management approach that includes active network and systems monitoring, detection, and response. A comprehensive security management mechanism becomes a natural outgrowth of the overall IS environment and provides overlapping, yet complementary, network, server, and desktop management ser vices. These performance and security management mechanisms are required to support an organization’s overall operational requirements. The network security management application supports the unique variables associated with the network security domain. Its architectural components address and support the following variables. • Attack analysis and response. Attack analysis and response is the realtime monitoring of attack recognition signatures, network protocol configurations, and other suspicious activities, including viruses, probing activity, and unauthorized modification of system access control mechanisms. Real-time monitoring provides the ability to rapidly detect unauthorized activity and respond with a variety of counterthreat techniques. The responses can range from simple IS security officer notification to proactive blocking of suspect users or behaviors, or automated reconfiguration of identified weaknesses or communications paths. • Misuse analysis and response. Misuse analysis and response is the realtime monitoring of the internal misuse of online resources. Typically, misuse is associated with activities that do not impact operational computing effectiveness, but are counter to documented policy regarding the acceptable use of organizational systems and resources. Automated response actions include denial of access, sending warning messages to the offending individuals, and the dispatch of e-mail messages to appropriate managers. • Vulnerability analysis and response. Vulnerability analysis and response consists of frequent, automated scanning of network components to identify unacceptable security-related vulnerability conditions — including automatic vulnerability assessment and reconfiguration for other potentially suspect devices once an active attack has been identified. This unacceptability is determined by a failure to conform to the organization’s IS security policy. The scanning includes automated detection of relevant design and administration vulnerabilities. Detection of the vulnerabilities leads to a number of user-defined responses, including automatic correction of the exposure, the dispatch of automated e-mail corrective actions, and the issuance of warning notices. 358

Cost-Effective IS Security via Dynamic Prevention and Protection • Configuration analysis and response. Configuration analysis and response includes frequent, automated scanning of performance-oriented configuration variables. • Risk posture analysis and response. Risk posture analysis and response includes automated evaluation of threat activity and vulnerability conditions. This activity goes beyond basic, hard-coded detection and response capabilities. It requires and bases its response on the analysis of a number of variables such as asset value, threat profile, and vulnerability conditions. Analysis supports real-time technical modifications and countermeasures in response to dynamic risk conditions. These countermeasures may include denial of access, active blocking of suspect users or behaviors, placement of conventional decoy files, and mazing — setting up decoy files and directory structures to lock an intruder into a maze of worthless directories to track his activities and form a basis for possible prosecution. • Audit and trends analysis. Audit and trends analysis includes the automated evaluation of threat, vulnerability, response, and awareness trends. The output of such an examination includes historical trends data associated with the IS security program’s four primary metrics: (1) risk, (2) risk posture, (3) response, and (4) awareness. This data supports both program planning and resource allocation decisions and automated assessments and reconfigurations based on clearly identified risk variables. • Real-time user awareness support. Real-time user awareness support provides recurring IS security policy, risk, and configuration training. This component ensures that users are aware of key organizational IS security policies, risk conditions, and violations of the policies. • Continuous requirement support. The dynamic prevention and protection model and its related technology components support organizational requirements to continuously ensure that countermeasures are installed and properly configured. Threats are monitored and responded to in a highly effective and timely manner, and vulnerability conditions are analyzed and corrected prior to exploitation. The model also supports the minimization of system misuse and increases general user and administrator IS security awareness. With the inclusion of the model and its supporting technologies, the entire spectrum of network security is addressed and measured. Although reaching the zero percent risk level is impossible in the real world of computing and telecommunication, incorporating dynamic prevention and protection security processes and mechanisms into the overall IS security effort supports reaching and maintaining a realistic solution — that is, the best solution for any one specific organization in terms of risk management and best value for each dollar of security investment. In addition to appro359

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE priately and consistently addressing these unique network security variables, these technology modules support the requirement for defining, collecting, analyzing, and improving the IS security program’s operational metrics.
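The architectural components described above amount to a continuous monitor, detect, and respond loop. The following minimal sketch illustrates the kind of decision logic such a platform automates; the event fields, severity scale, and response actions are assumptions for the example and do not describe any specific product.

# Illustrative sketch only: event fields, severity scale, and response actions are
# hypothetical, not taken from the chapter or from any particular security product.

from dataclasses import dataclass

@dataclass
class SecurityEvent:
    source: str        # e.g., "ids-sensor-3"
    category: str      # "attack", "misuse", or "vulnerability"
    severity: int      # 1 (informational) .. 5 (critical)
    description: str

def respond(event: SecurityEvent) -> str:
    """Map an event to a response, mirroring the escalation described in the text:
    notification for low-severity items, blocking or reconfiguration for immediate threats."""
    if event.category == "attack" and event.severity >= 4:
        return "block source and alert the IS security officer"
    if event.category == "vulnerability" and event.severity >= 4:
        return "auto-correct or quarantine the exposed device"
    if event.category == "misuse":
        return "warn the user and notify the responsible manager"
    return "log for longer-term trend analysis"

if __name__ == "__main__":
    events = [
        SecurityEvent("ids-sensor-3", "attack", 5, "signature match: buffer overflow attempt"),
        SecurityEvent("scanner-1", "vulnerability", 4, "default password on router"),
        SecurityEvent("proxy-2", "misuse", 2, "policy violation: unapproved site"),
    ]
    for e in events:
        print(f"{e.source}: {e.description} -> {respond(e)}")

In a real deployment, the mapping from events to responses would itself be driven by the organization's IS security policy and by the risk posture analysis described above.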


Chapter 30

Reengineering the Business Continuity Planning Process Carl B. Jackson

CONTINUITY PLANNING MEASUREMENTS

There is a continuing indication of a disconnect between executive management's perceptions of continuity planning (CP) objectives and the manner in which they measure its value. Traditionally, CP effectiveness was measured in terms of a pass/fail grade on a mainframe recovery test, or on the perceived benefits of backup/recovery sites and redundant telecommunications weighed against the expense for these capabilities. The trouble with these types of metrics is that they only measure CP direct costs, or indirect perceptions as to whether a test was effectively executed. These metrics do not indicate whether a test validates the appropriate infrastructure elements or even whether it is thorough enough to test a component until it fails, thereby extending the reach and usefulness of the test scenario. Thus, one might inquire as to the correct measures to use. While financial measurements do constitute one measure of the CP process, others measure the CP's contribution to the organization in terms of quality and effectiveness, which are not strictly weighed in monetary terms. The contributions that a well-run CP process can make to an organization include:

• Sustaining growth and innovation
• Enhancing customer satisfaction
• Providing for people needs
• Improving overall mission-critical process quality
• Providing for practical financial metrics



A RECIPE FOR RADICAL CHANGE: CP PROCESS IMPROVEMENT

Just prior to the new millennium, experts in organizational management efficiency began introducing performance process improvement disciplines. These process improvement disciplines have been slowly adopted across many industries and companies for improvement of general manufacturing and administrative business processes. The basis of these and other improvement efforts was the concept that an organization's processes (Process; see Exhibit 1) constituted the organization's fundamental lifeblood and, if made more effective and more efficient, could dramatically decrease errors and increase organizational productivity.

An organization's processes are a series of successive activities, and when they are executed in the aggregate, they constitute the foundation of the organization's mission. These processes are intertwined throughout the organization's infrastructure (individual business units, divisions, plants, etc.) and are tied to the organization's supporting structures (data processing, communications networks, physical facilities, people, etc.). A key concept of the process improvement and reengineering movement revolves around identification of process enablers and barriers (see Exhibit 1). These enablers and barriers take many forms (people, technology, facilities, etc.) and must be understood and taken into consideration when introducing radical change into the organization.

The preceding narration provides the backdrop for the idea of focusing on continuity planning not as a project, but as a continuous process that must be designed to support the other mission-critical processes of the organization. Therefore, the idea was born of adopting a continuous process approach to CP, along with understanding and addressing the people, technology, facility, and other enablers and barriers. This constitutes a significant or even radical change in thinking from the manner in which recovery planning has been traditionally viewed and executed.

Radical Changes Mandated

High management awareness and low CP execution effectiveness, coupled with the lack of consistent and meaningful CP measurements, call for radical changes in the manner in which one executes recovery planning responsibilities. The techniques used to develop mainframe-oriented disaster recovery (DR) plans of the 1980s and 1990s consisted of five to seven distinct stages, depending on whose methodology was being used, that required the recovery planner to:

1. Establish a project team and a supporting infrastructure to develop the plans.

Exhibit 1. Definitions

Activities: Activities are things that go on within a process or sub-process. They are usually performed by units of one (one person or one department). An activity is usually documented in an instruction. The instruction should document the tasks that make up the activity.

Benchmarking: Benchmarking is a systematic way to identify, understand, and creatively evolve superior products, services, designs, equipment, processes, and practices to improve the organization's real performance by studying how other organizations are performing the same or similar operations.

Business process improvement: Business process improvement (BPI) is a methodology that is designed to bring about step-function improvements in administrative and support processes using approaches such as FAST, process benchmarking, process redesign, and process reengineering.

Comparative analysis: Comparative analysis (CA) is the act of comparing a set of measurements to another set of measurements for similar items.

Enabler: An enabler is a technical or organizational facility/resource that makes it possible to perform a task, activity, or process. Examples of technical enablers are personal computers, copying equipment, decentralized data processing, voice response, etc. Examples of organizational enablers are enhancement, self-management, communications, education, etc.

Fast analysis solution technique: FAST is a breakthrough approach that focuses a group's attention on a single process for a one- or two-day meeting to define how the group can improve the process over the next 90 days. Before the end of the meeting, management approves or rejects the proposed improvements.

Future state solution: A combination of corrective actions and changes that can be applied to the item (process) under study to increase its value to its stakeholders.

Information: Information is data that has been analyzed, shared, and understood.

Major processes: A major process is a process that usually involves more than one function within the organization structure, and its operation has a significant impact on the way the organization functions. When a major process is too complex to be flowcharted at the activity level, it is often divided into sub-processes.

Organization: An organization is any group, company, corporation, division, department, plant, or sales office.

Process: A process is a logical, related, sequential (connected) set of activities that takes an input from a supplier, adds value to it, and produces an output to a customer.

Sub-process: A sub-process is a portion of a major process that accomplishes a specific objective in support of the major process.

System: A system is an assembly of components (hardware, software, procedures, human functions, and other resources) united by some form of regulated interaction to form an organized whole. It is a group of related processes that may or may not be connected.

Tasks: Tasks are individual elements or subsets of an activity. Normally, tasks relate to how an item performs a specific assignment.

From Harrington, H.J., Esseling, E.K.C., and Van Nimwegen, H., Business Process Improvement Workbook, McGraw-Hill, 1997, 1–20.


DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE 2. Conduct a threat or risk management review to identify likely threat scenarios to be addressed in the recovery plans. 3. Conduct a business impact analysis (BIA) to identify and prioritize time-critical business applications/networks and determine maximum tolerable downtimes. 4. Select an appropriate recovery alternative that effectively addresses the recovery priorities and timeframes mandated by the BIA. 5. Document and implement the recovery plans. 6. Establish and adopt an ongoing testing and maintenance strategy. Shortcomings of the Traditional Disaster Recovery Planning Approach The old approach worked well when disaster recovery of “glass-house” mainframe infrastructures was the norm. It even worked fairly well when it came to integrating the evolving distributed/client/server systems into the overall recovery planning infrastructure. However, when organizations became concerned with business unit recovery planning, the traditional DR methodology was ineffective in designing and implementing business unit/function recovery plans. Of primary concern when attempting to implement enterprisewide recovery plans was the issue of functional interdependencies. Recovery planners became obsessed with identification of interdependencies between business units and functions, as well as the interdependencies between business units and the technological services supporting time-critical functions within these business units. Losing Track of the Interdependencies The ability to keep track of departmental interdependencies for CP purposes was extremely difficult and most methods for accomplishing this were ineffective. Numerous circumstances made consistent tracking of interdependencies difficult to achieve. Circumstances affecting interdependencies revolve around the rapid rates of change that most modern organizations are undergoing. These include reorganization/restructuring, personnel relocation, changes in the competitive environment, and outsourcing. Every time an organizational structure changes, the CPs must change and the interdependencies must be reassessed. The more rapid the change, the more daunting the CP reshuffling. Because many functional interdependencies could not be tracked, CP integrity was lost and the overall functionality of the CP was impaired. There seemed to be no easy answers to this dilemma. Interdependencies Are Business Processes Why are interdependencies of concern? And what, typically, are the interdependencies? The answer is that, to a large degree, these interdependencies are the business processes of the organization and they are of concern 364

because they must function in order to fulfill the organization's mission. Approaching recovery planning challenges with a business process viewpoint can, to a large extent, mitigate the problems associated with losing interdependencies, and also ensure that the focus of recovery planning efforts is on the most crucial components of the organization. Understanding how the organization's time-critical business processes are structured will assist the recovery planner in mapping the processes back to the business units/departments; supporting technological systems, networks, facilities, vital records, people, etc.; and keeping track of the processes during reorganizations or during times of change.

THE PROCESS APPROACH TO CONTINUITY PLANNING

Traditional approaches to mainframe-focused disaster recovery planning emphasized the need to recover the organization's technological and communications platforms. Today, many companies have shifted away from technology recovery and toward continuity of prioritized business processes and the development of specific business process recovery plans. Many large corporations use the process reengineering/improvement disciplines to increase overall organizational productivity. CP itself should also be viewed as such a process. Exhibit 2 provides a graphical representation of how the enterprisewide CP process framework should look. This approach to continuity planning consolidates three traditional continuity planning disciplines, as follows:

1. IT disaster recovery planning (DRP). Traditional IT DRP addresses the continuity planning needs of the organizations' IT infrastructures, including centralized and decentralized IT capabilities, and includes both voice and data communications network support services.
2. Business operations resumption planning (BRP). Traditional BRP addresses the continuity of an organization's business operations (e.g., accounting, purchasing, etc.) should they lose access to their supporting resources (e.g., IT, communications network, facilities, external agent relationships, etc.).
3. Crisis management planning (CMP). CMP focuses on assisting the client organization in developing an effective and efficient enterprisewide emergency/disaster response capability. This response capability includes forming appropriate management teams and training their members in reacting to serious company emergency situations (e.g., hurricane, earthquake, flood, fire, serious hacker or virus damage, etc.). CMP also encompasses response to life-safety issues for personnel during a crisis or response to disaster.
4. Continuous availability (CA). In contrast to the other CP components as explained above, the recovery time objective (RTO) for recovery of infrastructure support resources in a 24/7 environment has diminished to zero time.

Exhibit 2. The Enterprisewide CP Process Framework
(Figure: an enterprisewide availability infrastructure and approach that is business-process focused and driven by risk management/analysis/BIA, continuity and recovery strategy, e-business uptime requirements, and benchmarking/peer analysis. It links crisis management planning, business resumption planning (BRP), disaster recovery planning (DRP), and continuous availability (continuous operations, disaster avoidance, redundancy and diversity, known failover and recovery timeframes), supported by business process/function/unit recovery planning and execution teams, technology infrastructure recovery planning and execution teams, and global enterprise emergency and recovery response teams.)

That is, the client organization cannot afford to lose operational capabilities for even a very short period of time without significant financial (revenue loss, extra expense) or operational (customer service, loss of confidence) impact. The CA service focuses on maintaining the highest uptime of support infrastructures to 99 percent and higher.

MOVING TO A CP PROCESS IMPROVEMENT ENVIRONMENT

Route Map Profile and High-Level CP Process Approach

A practical, high-level approach to CP process improvement is demonstrated by breaking down the CP process into individual sub-process components as shown in Exhibit 3. The six major components of the continuity planning business process are described below.

Current State Assessment/Ongoing Assessment. Understanding the approach to enterprisewide continuity planning as illustrated in Exhibit 3, one can measure the "health" of the continuity planning

Exhibit 3. A Practical, High-Level Approach to the CP Process Improvement
(Figure: the CP process broken into sub-process components (current state assessment, process risk and impact baselining, strategy development, continuity plan infrastructure development and documentation support, implementation in the business units, and operations/continuous improvement), tied to executive goals and objectives, business unit plan ownership, maximum tolerable downtime and resource criticality inputs, and related functions such as information security, vital records, crisis management, information technology, physical facilities, human resources, recovery vendors, and audit.)

process. During this process, existing continuity planning business sub-processes are assessed to gauge their overall effectiveness. It is sometimes useful to employ gap analysis techniques to understand current state, desired future state, and then understand the people, process, and technology barriers and enablers that stand between the current state and the future state. An approach to co-development of current state/future state visioning sessions is illustrated in Exhibit 4. The current state assessment process also involves identifying and determining how the organization “values” the CP process and measures its success (often overlooked and often leading to the failure of the CP process). Also during this process, an organization’s business processes are examined to determine the impact of loss or interruption of service on the overall business through performance of a business impact analysis (BIA). The goal of the BIA is to prioritize business processes and assign 367


Exhibit 4. Current State/Future State Visioning Overview
(Figure: 1. Define the current state; 2. Vision the future state; 3. Document, analyze, and design across the gap. Supporting questions cover key performance indicators, key future-state continuity-related initiatives, critical success factors, how success is measured, and the people-, process-, technology-, and mission-related risks, barriers, and rewards.)

the recovery time objective (RTO) for their recovery, as well as for the recovery of their support resources. An important outcome of this activity is the mapping of time-critical processes to their support resources (e.g., IT applications, networks, facilities, communities of interest, etc.). Process Risk and Impact Baseline. During this process, potential risks and vulnerabilities are assessed, and strategies and programs are developed to mitigate or eliminate those risks. The stand-alone risk management review (RMR) commonly looks at the security of physical, environmental, and information capabilities of the organization. In general, the RMR should identify or discuss the following areas:

• Potential threats
• Physical and environmental security
• Information security
• Recoverability of time-critical support functions
• Single-points-of-failure
• Problem and change management
• Business interruption and extra expense insurance
• An offsite storage program, etc.
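The BIA and RMR results described above are easier to keep current if they are captured in a structured register that maps each time-critical process to its RTO and supporting resources. The sketch below is illustrative only; the process names, RTO values, and resource lists are hypothetical.

# Hypothetical BIA register: process names, RTOs, and resource mappings are invented
# for illustration and are not taken from the chapter.

from dataclasses import dataclass, field

@dataclass
class BusinessProcess:
    name: str
    rto_hours: float                               # recovery time objective agreed during the BIA
    resources: list = field(default_factory=list)  # supporting IT applications, networks, facilities

def recovery_priorities(processes):
    """Order processes by RTO, shortest first, so the most time-critical lead the plan."""
    return sorted(processes, key=lambda p: p.rto_hours)

if __name__ == "__main__":
    register = [
        BusinessProcess("Order entry", rto_hours=2, resources=["ERP", "WAN", "call center"]),
        BusinessProcess("Payroll", rto_hours=72, resources=["HR system"]),
        BusinessProcess("Customer web storefront", rto_hours=0.5, resources=["web farm", "DB cluster"]),
    ]
    for proc in recovery_priorities(register):
        print(f"{proc.name}: RTO {proc.rto_hours}h, depends on {', '.join(proc.resources)}")

Keeping such a register current through reorganizations is exactly the interdependency-tracking problem described earlier; it is useful only if it is owned and maintained as part of the ongoing CP process.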

Strategy Development. This process involves facilitating a workshop or series of workshops designed to identify and document the most appropriate recovery alternative to CP challenges (e.g., determining if a hotsite is needed for IT continuity purposes, determining if additional communications circuits should be installed in a networking environment, determining if additional workspace is needed in a business operations environment, etc.). Using the information derived from the risk assessments 368

Reengineering the Business Continuity Planning Process above, design long-term testing, maintenance, awareness, training, and measurement strategies. Continuity Plan Infrastructure. During plan development, all policies, guidelines, continuity measures, and continuity plans are formally documented. Structure the CP environment to identify plan owners and project management teams, and to ensure the successful development of the plan. In addition, tie the continuity plans to the overall IT continuity plan and crisis management infrastructure. Implementation. During this phase, the initial versions of the continuity or crisis management plans are implemented across the enterprise environment. Also during this phase, long-term testing, maintenance, awareness, training, and measurement strategies are implemented. Operations. This phase involves the constant review and maintenance of the continuity and crisis management plans. In addition, this phase may entail maintenance of the ongoing viability of the overall continuity and crisis management business processes.

HOW DOES ONE GET THERE? THE CONCEPT OF THE CP VALUE JOURNEY The CP value journey is a helpful mechanism for co-development of CP expectations by the organization’s top management group and those responsible for recovery planning. To achieve a successful and measurable recovery planning process, the following checkpoints along the CP value journey should be considered and agreed upon. The checkpoints include: • Defining success. Define what a successful CP implementation will look like. What is the future state? • Aligning the CP with business strategy. Challenge objectives to ensure that the CP effort has a business-centric focus. • Charting an improvement strategy. Benchmark where the organization and the organization’s peers are, the organization’s goals based on their present position as compared to their peers, and which critical initiatives will help the organization achieve its goals. • Becoming an accelerator. Accelerate the implementation of the organization’s CP strategies and processes. In today’s environment, speed is a critical success factor for most companies. • Creating a winning team. Build an internal/external team that can help lead the company through CP assessment, development, and implementation. • Assessing business needs. Assess time-critical business process dependence on the supporting infrastructure. 369

• Documenting the plans. Develop continuity plans that focus on ensuring that time-critical business processes will be available.
• Enabling the people. Implement mechanisms that help enable rapid reaction and recovery in times of emergency, such as training programs, a clear organizational structure, and a detailed leadership and management plan.
• Completing the organization's CP strategy. Position the organization to complete the operational- and personnel-related milestones necessary to ensure success.
• Delivering value. Focus on achieving the organization's goals while simultaneously envisioning the future and considering organizational change.
• Renewing/recreating. Challenge the new CP process structure and organizational management to continue to adapt and meet the challenges of demonstrating availability and recoverability.

This value journey technique for raising the awareness level of management helps both facilitate meaningful discussions about the CP process and ensure that the resulting CP strategies truly add value. As discussed later, this value-added concept will also provide additional metrics by which the success of the overall CP process can be measured. In addition to the approaches of CP process improvement and the CP value journey mentioned above, the need to introduce people-oriented organizational change management (OCM) concepts is an important component in implementing a successful CP process.

HOW IS SUCCESS MEASURED? BALANCED SCORECARD CONCEPT1

A complement to the CP process improvement approach is the establishment of meaningful measures or metrics that the organization can use to weigh the success of the overall CP process. Traditional measures include:

• How much money is spent on hotsites?
• How many people are devoted to CP activities?
• Was the hotsite test a success?

Instead, the focus should be on measuring the CP process contribution to achieving the overall goals of the organization. This focus helps to:

• Identify agreed-upon CP development milestones
• Establish a baseline for execution
• Validate CP process delivery
• Establish a foundation for management satisfaction to successfully manage expectations

The CP balanced scorecard includes a definition of the:

• Value statement
• Value proposition
• Metrics/assumptions on reduction of CP risk
• Implementation protocols
• Validation methods

Exhibit 5. Balanced Scorecard Concept
(Figure: the definition of the future state, vision, and strategy/goals ("How will your company differ? What are the critical success factors? What are the critical measures?") drives balanced scorecard measurements across growth and innovation, customer satisfaction, people, process quality, and financial perspectives.)

Exhibit 5 and Exhibit 6 illustrate the balanced scorecard concept and show examples of the types of metrics that can be developed to measure the success of the implemented CP process. Included in this balanced scorecard approach are the new metrics upon which the CP process will be measured. Following this balanced scorecard approach, the organization should define what the future state of the CP process should look like (see the preceding CP value journey discussion). This future state definition should be co-developed by the organization's top management and those responsible for development of the CP process infrastructure. Exhibit 4 illustrates the current state/future state visioning overview, a technique that can also be used for developing expectations for the balanced scorecard. Once the future state is defined, the CP process development group can outline the CP process implementation critical success factors in the areas of:

• Growth and innovation
• Customer satisfaction
• People
• Process quality
• Financial state

These measures must be uniquely developed based on the specific organization's culture and environment.
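As a rough illustration of how scorecard metrics in these five areas might be recorded and tracked, consider the sketch below; the individual metrics, targets, and actual values are hypothetical and would need to be replaced with measures agreed upon by the organization.

# Hypothetical CP balanced scorecard entries; metric names, targets, and actuals are
# invented for illustration only.

SCORECARD = {
    "Growth and innovation": [("new processes covered by CP this year", 4, 3)],
    "Customer satisfaction": [("customer-facing processes within RTO in last exercise (%)", 95, 90)],
    "People": [("staff completing CP awareness training (%)", 100, 82)],
    "Process quality": [("plans exercised in the last 12 months (%)", 100, 70)],
    "Financial": [("estimated avoided downtime cost vs. CP spend (ratio)", 3.0, 2.1)],
}

def scorecard_report(scorecard):
    """Print each metric with its target, actual value, and a simple met/not-met flag."""
    for perspective, metrics in scorecard.items():
        print(perspective)
        for name, target, actual in metrics:
            status = "met" if actual >= target else "below target"
            print(f"  - {name}: target {target}, actual {actual} ({status})")

if __name__ == "__main__":
    scorecard_report(SCORECARD)

The particular numbers matter less than the discipline of carrying at least one agreed, regularly refreshed measure in each perspective.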

Exhibit 6. Continuity Process Scorecard

Question: How should the organization benefit from implementation of the following continuity process components in terms of people, processes, technologies, and mission/profits?

Continuity planning process components (each assessed against the four columns People, Processes, Technologies, and Mission/Profits):

• Process methodology
• Documented DRPs
• Documented BRPs
• Documented crisis management plans
• Documented emergency response procedures
• Documented network recovery plan
• Contingency organization walkthroughs
• Employee awareness program
• Recovery alternative costs
• Continuous availability infrastructure
• Ongoing testing programs
• etc.

WHAT ABOUT CONTINUITY PLANNING FOR WEB-BASED APPLICATIONS?

Evolving with the birth of the Web and Web-based businesses is the requirement for 24/7 uptime. Traditional recovery time objectives have disappeared for certain business processes and support resources that support the organizations' Web-based infrastructure. Unfortunately, simply preparing Web-based applications for sustained 24/7 uptime is not the only answer. There is no question that application availability issues must be addressed, but it is also important that the reliability and availability of other Web-based infrastructure components be addressed (such as computer hardware, Web-based networks, database file systems, Web servers, and file and print servers, as well as the physical, environmental, and information security concerns relative to each of these [see RMR above]). Preparing the entirety of this infrastructure to remain available through major and minor disruptions is usually referred to as continuous or high availability.


Continuous availability (CA) is not simply bought; it is planned for and implemented in phases. The key to a reliable and available Web-based infrastructure is to ensure that each of the components of the infrastructure has a high degree of resiliency and robustness. To substantiate this statement, Gartner Research reports: "Replication of databases, hardware servers, Web servers, application servers, and integration brokers/suites helps increase availability of the application services. The best results, however, are achieved when, in addition to the reliance on the system's infrastructure, the design of the application itself incorporates considerations for continuous availability. Users looking to achieve continuous availability for their Web applications should not rely on any one tool but should include the availability considerations systematically at every step of their application projects."2

Implementing a continuous availability methodological approach is the key to an organized and methodical way to achieve 24/7 or near 24/7 availability. Begin this process by understanding business process needs and expectations, and the vulnerabilities and risks of the network infrastructure (e.g., Internet, intranet, extranet, etc.), including undertaking single-points-of-failure analysis. As part of considering implementation of continuous availability, the organization should examine the resiliency of its network infrastructure and the components thereof, including the capability of its infrastructure management systems to handle network faults, network configuration and change, the ability to monitor network availability, and the ability of individual network components to handle capacity requirements. See Exhibit 7 for an example pictorial representation of this methodology. The CA methodological approach is a systematic way to consider and move forward in achieving a continuously available Web-based environment. A very high-level overview of this methodology is as follows.

• Assessment/planning. During this phase, the enterprise should endeavor to understand the current state of business process owner expectations/requirements and the components of the technological infrastructure that support Web-based business processes. Utilizing both interview techniques (people to people) and existing system and network automated diagnostic tools will assist in understanding availability status and concerns.
• Design. Given the results of the current state assessment, design the continuous availability strategy and implementation/migration plans. This will include developing a Web-based infrastructure classification system to be used to classify the governance processes used for granting access to and use of support for Web-based resources.

Exhibit 7. Continuous Availability Methodological Approach
(Figure: four phases. Assessment/planning draws on business process owner needs and expectations, infrastructure stability audits, infrastructure management assessments, network resiliency assessments, and SLA review, producing an infrastructure availability assessment strategy. Design produces the CA design and migration plan; implementation delivers the CA infrastructure; operations/monitoring maintains the operational CA infrastructure.)

• Implementation. Migrate existing infrastructures to the Web-based environment according to design specifications as determined during the design phase. • Operations/monitoring. Establish operational monitoring techniques and processes for the ongoing administration of the Web-based infrastructure. Along these lines, in their book Blueprints for High Availability: Designing Resilient Distributed Systems,3 Marcus and Stern recommend several fundamental rules for maximizing system availability (paraphrased): • Spend money…but not blindly. Because quality costs money, investing in an appropriate degree of resiliency is necessary. • Assume nothing. Nothing comes bundled when it comes to continuous availability. End-to-end system availability requires up-front planning and cannot simply be bought and dropped in place. • Remove single-points-of-failure. If a single link in the chain breaks, regardless of how strong the other links are, the system is down. Identify and mitigate single-points-of-failure. • Maintain tight security. Provide for the physical, environmental, and information security of Web-based infrastructure components.


• Consolidate servers. Consolidate many small servers' functionality onto fewer, larger servers to facilitate operations and reduce complexity.
• Automate common tasks. Automate the commonly performed systems tasks. Anything that can be done to reduce operational complexity will assist in maintaining high availability.
• Document everything. Do not discount the importance of system documentation. Documentation provides audit trails and instructions to present and future systems operators on the fundamental operational intricacies of the systems in question.
• Establish service level agreements (SLAs). It is most appropriate to define enterprise and service provider expectations ahead of time. SLAs should address system availability levels, hours of service, locations, priorities, and escalation policies.
• Plan ahead. Plan for emergencies and crises, including multiple failures, in advance of actual events.
• Test everything. Test all new applications, system software, and hardware modifications in a production-like environment prior to going live.
• Maintain separate environments. Provide for separation of systems, when possible. This separation might include separate environments for the following functions: production, production mirror, quality assurance, development, laboratory, and disaster recovery/business continuity site.
• Invest in failure isolation. Plan, to the degree possible, to isolate problems so that if or when they occur, they cannot boil over and affect other infrastructure components.
• Examine the history of the system. Understanding system history will assist in understanding what actions are necessary to move the system to a higher level of resiliency in the future.
• Build for growth. A given in the modern computer era is that reliance on system resources increases over time. As enterprise reliance on system resources grows, the systems must grow. Therefore, adding systems resources to existing reliable system architectures requires preplanning and concern for workload distribution and application leveling.
• Choose mature software. It should go without saying that mature software that supports a Web-based environment is preferred over untested solutions.
• Select reliable and serviceable hardware. As with software, selecting hardware components that have demonstrated high mean times between failures is preferable in a Web-based environment.
• Reuse configurations. If the enterprise has stable system configurations, reuse or replicate them as much as possible throughout the environment. The advantages of this approach include ease of support, pretested configurations, a high degree of confidence for new rollouts, the possibility of bulk purchasing, spare parts availability, and less to learn for those responsible for implementing and operating the Web-based infrastructure.
• Exploit external resources. Take advantage of other organizations that are implementing and operating Web-based environments. It is possible to learn from others' experiences.
• One problem, one solution. Understand, identify, and utilize the tools necessary to maintain the infrastructure. Tools should fit the job; so obtain them and use them as they were designed to be used.
• KISS: keep it simple…. Simplicity is the key to planning, developing, implementing, and operating a Web-based infrastructure. Endeavor to minimize Web-based infrastructure points of control and contention, as well as the introduction of variables.

Marcus and Stern's book3 is an excellent reference for preparing for and implementing highly available systems. Reengineering the continuity planning process involves not only reinvigorating continuity planning processes, but also ensuring that Web-based enterprise needs and expectations are identified and met through the implementation of continuous availability disciplines.

SUMMARY

The failure of organizations to measure the success of their CP implementations has led to an endless cycle of plan development and decline. The primary reason for this is that a meaningful set of CP measurements has not been adopted to fit the organization's future-state goals. Because these measurements are lacking, expectations of both top management and those responsible for CP often go unfulfilled. A radical change in the manner in which organizations undertake CP implementation is necessary. This change should include adopting and utilizing the business process improvement (BPI) approach for CP. This BPI approach has been implemented successfully at many Fortune 1000 companies over the past 20 years. Defining CP as a process, applying the concepts of the CP value journey, expanding CP measurements utilizing the CP balanced scorecard, and exercising the organizational change management (OCM) concepts will facilitate a radically different approach to CP. Finally, because Web-based business processes require 24/7 uptime, implementation of continuous availability disciplines is necessary to ensure that the CP process is as fully developed as it should be.

References

1. Kaplan, R.S. and Norton, D.P., The Balanced Scorecard: Translating Strategy into Action, HBS Press, 1996.
2. Gartner Group RAS Services, COM-12-1325, 29 September 2000.
3. Marcus, E. and Stern, H., Blueprints for High Availability: Designing Resilient Distributed Systems, John Wiley & Sons, 2000.



Chapter 31

Wireless Security: Here We Go Again Aldora Louw William A. Yarberry, Jr.

Ronald Reagan's famous rejoinder in the 1980 presidential debates, "There you go again," applies equally well to wireless security. In the early days of personal computers, professional IT staff were alarmed at the uncontrolled, ad hoc, and unsecured networks that began to spring up. PCs were bought by users out of "miscellaneous supplies" budgets. The VP of Information Systems had no reliable inventory of these new devices; and certainly corporate data was not particularly secure or backed up on the primitive hard drives. Now, 20 years later, we have architectures and systems to control traditional networked systems. Unfortunately, history is repeating itself with wireless LANs and wireless applications. It is convenient to set up a wireless LAN or an application that uses wireless technology; however, the convenience means that sometimes wireless technology is spreading throughout organizations without oversight or adequate security functions. CIOs today, like VPs of Information Systems 20 years ago, are missing key information. Where are the wireless devices? Are they secure? Exacerbating the problem of wireless security is the general lack of awareness of the risks. Interception and even spoofing are easier over the airwaves than with cables, simply because it is not necessary to get physical access to a conduit in order to tap into the information flow.

BACKGROUND

Like old cities developed around cow-paths, wireless technology meanders around a confusing history of regulations, evolving and proprietary standards, a plethora of protocols, and ever smaller and faster hardware. To simplify this discussion, a "wireless" transmission is one that does not travel through a wire. This approach is not as dull-witted as it would seem. The media focus so much on the newer technologies, such as Bluetooth, that traditional wireless communications (microwave, satellite, and radio transmissions) are often ignored.



Exhibit 1. IntelPro Wireless 2011B LAN Access Point (Courtesy of Intel (www.intel.com))

Regardless of the precise definition of "wireless," it is clear that the technology is growing quickly. Following are some of the key protocols and standards that are driving the industry.

Standards and Protocols (Software)

802.11b (802.11a and 802.11g). This specification is today's choice for wireless LAN communications. Based on work by the IEEE, 802.11b uses radio frequencies to transmit higher-level protocols, such as IP. Wireless LANs are convenient and quick to set up. Applications, servers, and other devices see the traffic going over the airwaves as no different than wire-based Ethernet packets. In a typical wireless LAN, a transmitter/receiver device, such as that shown in Exhibit 1, connects to the wired network at a fixed location. An alternative to the fixed access point is the ad hoc network, which uses devices such as notebook PCs equipped with wireless adaptor cards to communicate with each other via peer-to-peer transmissions. The recently adopted 802.11a and 802.11g standards, the successors to 802.11b, allow for considerably greater bandwidth (up to 54 Mbps versus the original 11 Mbps). With this increase in bandwidth, wireless LANs will likely become much more prevalent. Using the appropriate access points and directional antennas, wireless LANs can be linked over more than a mile. As discussed later, it is not difficult to see why "war driving" around the premises of buildings is so popular with hackers.

iMode. To date, the iMode service of DoCoMo is used almost exclusively in Japan. However, it is a bellwether for the rest of the world. Using proprietary (and unpublished) protocols, iMode provides text messaging, E-commerce, Web browsing, and a plethora of services to Japanese customers. Another advantage of the service is that it is packet switched and thus always on. What has grabbed the business community's full attention is the degree of penetration within Japan — 28 million iMode users out of a total of 60 million cellular subscribers. Japanese teenagers have created a pseudo-language of text codes that rivals the cryptic language of Internet chat rooms (brb for "be right back," etc.).

Bluetooth. Intended as a short-distance (generally less than ten meters) communication standard, Bluetooth allows many devices to communicate with each other on an ad hoc basis, forming a "pico-net." For example, PDAs can communicate with properly equipped IP telephones to transfer voice-mail to the PDA when the authorized owner walks into her office. Bluetooth is a specification that, when followed by manufacturers, allows devices to emit radio signals in the unlicensed 2.4-GHz frequency band. By using spread-spectrum, full-duplex signals at up to 1600 hops per second, interference is greatly reduced, allowing up to seven simultaneous connections in one location. It is intended to be used by laptops, digital cameras, PDAs, devices in automobiles, and other consumer devices. Because of its short range, interception from outside a building is difficult (not to mention the additional effort required to overcome frequency hopping). Nevertheless, there are scenarios that could result in security breaches. For example, transmission between a Bluetooth wireless headset and a base cellular phone could be intercepted as an executive walks through an airport.

Cellular. Standards for mobile wireless continue to evolve. The United States and parts of South America originally used the AMPS analog system; this system is not secure at all, much to the chagrin of some embarrassed politicians. More current protocols include TDMA, CDMA, and the world standard GSM. GSM now supports broadband digital data transmission rates using general packet radio services (GPRS), and CDMA providers are offering advanced data services using a technology called CDMA2000 1x.

Miscellaneous, Older Wireless Technologies. Any consideration of wireless security should include older technologies such as satellite communications (both geostationary and low earth orbit), microwave, infrared (line of sight, building to building), CDPD for narrowband data transmission over unused bandwidth in the cellular frequencies, and cordless phones operating in a number of public frequencies (most recently 900 MHz and 2.4 GHz). It is important to note that — particularly in telecommunications — technologies never seem to die. Any complete review of wireless security should at least consider these older, sometimes less secure transmission media.

Hardware

Personal Digital Assistants. Palm Pilots, iPAQs, Blackberrys, and other devices are proliferating. Typically, they use frequencies somewhere in the cellular range and require sufficient tower (transmitter) density to work well. For example, ordering a book on amazon.com using a Palm Pilot is not likely to work in Death Valley, California.

Laptops with Wireless Adaptor Cards. Using adaptor cards or wireless connections like Compaq's Bluetooth Multiport Module, laptops communicate with each other or with servers linked to an access point.

Cell Phones. The technologies of cellular phones, PDAs, dictation machines, and other devices are merging. Cell phones, especially those with displays and mini-browsers, provide the form factor for Internet, public telephone system, and short-range communications.

The ISO Stack Still Applies

The ubiquitous, seven-layer ISO stack applies to wireless communications as well. Although a discussion of this topic is outside the scope of this chapter, there is one protocol stack concept that should be kept in mind: traveling over the air is the logical equivalent of traveling over a copper wire or fiber. Airwave protocols represent layer 2 protocols, much like Frame Relay or ATM.1 If a TCP/IP layer 3 link is established over a wireless network, it is still an IP network. It merely rides over a protocol designed for transmission in the air rather than through copper atoms or light waves. Hence, many of the same security concepts historically applied to IP networks, such as authentication, non-repudiation, etc., still apply.

WIRELESS RISKS

A January 2002 article in Computerworld described how a couple of professional security firms were able to easily intercept wireless transmissions at several airports. They picked up sensitive network information that could be used to break in or to actually establish a rogue but authorized node on the airline network.

More threatening is the newly popular "war driving" hobby of today's au courant hackers. Using an 802.11b-equipped notebook computer with appropriate software, hackers drive around buildings scanning for 802.11b access points. The following conversation, quoted from a newsgroup for wireless enthusiasts in the New York City area, illustrates the level of risk posed by war driving:

Just an FYI for everyone, they are going to be changing the nomenclature of 'War Driving' very soon. Probably to something like 'ap mapping' or 'net stumbling' or something of the sort. They are trying to make it sound less destructive, intrusive and illegal, which is a very good idea. This application that is being developed by Marius Milner of BAWUG is great. I used it today. Walking around in my neighborhood (Upper East Side Manhattan) I found about 30 access points. A company called www.rexspeed.com is setting up access points in residential buildings. Riding the bus down from the Upper East Side to Bryant Park, I found about 15 access points. Walking from Bryant Park to Times Square, I found 10 access points. All of this was done without any external antenna. In general, 90 percent of these access points are not using WEP. Fun stuff.

The scanning utility referred to above is the Network Stumbler, written by Marius Milner. It identifies MAC addresses (physical hardware addresses), signal-to-noise ratios, and SSIDs.2 Security consultant Rich Santalesa points out that if a GPS receiver is added to the notebook, the utility records the exact location of the signal.

Many more examples of wireless vulnerability could be cited. Looking at these wide-open links reminds us of the first days of the Internet, when the novelty of the technology obscured the risks from intruders. Then, as now, the overriding impediment to adequate security was simple ignorance of the risks. IT technicians and sometimes even knowledgeable users set up wireless networks. Standard — but optional — security features such as WEP (Wired Equivalent Privacy) may not be implemented.

Viewing the handheld or portable device as the weak sibling of the wireless network is a useful perspective. As wireless devices increase their memory, speed, and operating system complexity, they will only become more vulnerable to viruses and rogue code that can facilitate unauthorized transactions.

The following sections outline some defenses against wireless hacking and snooping. We start with the easy defenses first, based on security consultant Don Parker's oft-repeated statement of the obvious: "Prudent security requires shutting the barn doors before worrying about the rat holes."

DEFENSES

Virtually all the security industry's cognoscenti agree that it is perfectly feasible to achieve a reasonable level of wireless security. And it is desperately needed — for wireless purchases, stock transactions, transmissions of safety information via wireless PDA to engineers in hazardous environments, and other activities where security is required. The problems come from lack of awareness, cost to implement, competing standards, and legacy equipment.

Following are some current solutions that should be considered if the business exposure warrants the effort.

Awareness and Simple Procedures

First, make management, IT and telecom personnel, and all users aware that wireless information can be intercepted and used to penetrate the organization's systems and information. In practical terms, this means:
• Obtain formal approval to set up wireless LANs and perform a security review to ensure WEP or other security measures have been put in place.
• Limit confidential conversations where security is notoriously lax. For example, many cellular phones are dual mode and operate on a completely unsecured protocol/frequency in areas where only analog service is available. Some cell phones have the ability to disable dual mode so they only operate in the relatively more secure digital mode.
• Use a password on any PDA or similar device that contains sensitive data. An even stronger protection is to encrypt the data. For example, Certicom offers the MovianCrypt security package, which uses a 128-bit advanced encryption standard to encrypt all data on a PDA.
• Ensure that the security architecture does not assume that the end device (e.g., a laptop) will always be in the physical possession of the authorized owner.

Technical Solutions

There are several approaches to securing a wireless network. Some, like WEP, focus on the nature of wireless communication itself. Others use tunneling and traditional VPN (virtual private network) security methods to ensure that the data is strongly encrypted at the IP layer. Of course, like the concentric walls of medieval castles, the best defense includes multiple barriers to access.

Start with WEP, an optional function of the IEEE 802.11 specification. If implemented, it works by creating secret shared encryption keys. Both source and destination stations use these keys to alter frame bits to avoid disclosure to eavesdroppers. WEP is designed to provide the same security for wireless transmissions as could be expected for communications via copper wire or fiber. It was never intended to be the Fort Knox of security systems. WEP has been criticized because the way it reuses the shared secret across frames lets sniffers ferret out the secret and compromise the system. Some Berkeley researchers broke the 40-bit encryption relatively quickly after the IEEE released the specification.
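To make the mechanism concrete, the following is a minimal Python sketch of the RC4-based, per-frame encryption that WEP performs: a 24-bit initialization vector (IV) is prepended to the shared secret to seed the keystream, and the frame payload is XORed with that keystream. The key and IV values are illustrative only, and the sketch omits WEP's CRC-32 integrity value and all 802.11 framing details; it is shown simply to indicate why a short IV and a single shared secret invite keystream reuse and, eventually, key recovery.

# Minimal WEP-style RC4 encryption sketch (illustrative only, not for production use)

def rc4_keystream(seed: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + seed[i % len(seed)]) % 256
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = bytearray()
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def wep_encrypt(payload: bytes, shared_key: bytes, iv: bytes) -> bytes:
    # WEP seeds RC4 with the 24-bit IV followed by the shared secret; the IV
    # travels in the clear, and only 2**24 IVs exist, so keystream reuse is
    # inevitable on a busy LAN.
    keystream = rc4_keystream(iv + shared_key, len(payload))
    return iv + bytes(p ^ k for p, k in zip(payload, keystream))

if __name__ == "__main__":
    key = bytes([0x01, 0x02, 0x03, 0x04, 0x05])   # hypothetical 40-bit shared secret
    print(wep_encrypt(b"example frame", key, bytes([0x00, 0x00, 0x01])).hex())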


Exhibit 2. Two-Factor Authentication from RSA (Courtesy of RSA (www.rsasecurity.com))

WEP also has a few other weaknesses, including:
• Vendors have added proprietary features to their WEP implementations, making integration of wireless networks more difficult.
• Anyone can pick up the signal, as in the "war driving" scenario described above. This means that even if hackers do not want to bother decoding the traffic using a wireless sniffer — which is somewhat difficult — they can still get onto the network. That is, they are plugged in just the same as if they had taken their laptop into a spare office and run a cable to the nearest Ethernet port.

A partial solution is to enable MAC address monitoring.3 By adding MAC addresses (unique to each piece of hardware, such as a laptop) to the access point device, only those individuals possessing equipment that matches the MAC address table can get onto the network. However, the solution is difficult to scale because the MAC address tables must be maintained manually.
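The MAC address monitoring just described amounts to a manually maintained allow-list. The sketch below (hypothetical addresses and function names) shows the check an access point performs and why the approach is hard to scale: every addition or retirement of a device means editing the table by hand, and a determined attacker can still spoof a permitted address.

# Sketch of MAC address filtering at an access point (hypothetical data)
ALLOWED_MACS = {
    "03:35:05:36:47:7a",   # sample address format (see Note 3)
    "00:0c:29:4f:8e:35",   # hypothetical laptop adaptor card
}

def admit(source_mac: str) -> bool:
    # Return True if the transmitting station may associate with the access point.
    return source_mac.lower() in ALLOWED_MACS

print(admit("03:35:05:36:47:7A"))   # True
print(admit("00:11:22:33:44:55"))   # False: association rejected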

None of these deficiencies should discourage one from implementing WEP. Just implementing WEP out of the box will discourage many hackers. Also, WEP itself is maturing, taking advantage of the increased processing power available on handheld and portable devices to allow more compute-intensive security algorithms.

As mentioned, authentication of laptops and other devices on the user end is as important in wireless as it is in dial-up remote access. It is beyond human diligence never to lose a portable device or have one stolen. VPNs with remote, two-factor authentication superimpose a layer of security that greatly enhances any native wireless protection system. RSA's SecurID, shown in Exhibit 2, is an example of a two-factor system based on something the user knows (a password) and something the user possesses (an encrypted card). In addition to VPNs, software-based firewalls such as BlackICE are useful for end-computer security. Relying on tools such as these takes some of the pressure off application-level security, which is sometimes weak due to loose password management, default passwords, and other flaws.

A Hole in the Fabric of Wireless Security

Wireless security protocols are evolving. WAP 1.2.1 uses Wireless Transport Layer Security (WTLS), which very effectively encrypts communications from, for example, a cell phone to a WAP gateway. At the WAP gateway, the message must be momentarily decrypted before it is sent on to the Web server via SSL (Secure Sockets Layer). This "WAP GAP" exists today but is supposed to be eliminated in WAP version 1.3 or later. A temporary fix is to strengthen physical security around the WAP gateway and add additional layers of security onto the higher-level applications.

Traditional Security Methods Still Work

Of course, existing security methods still apply — from the ancient Spartans' steganography techniques (hidden messages) to the mind-numbingly complex cryptography algorithms of today. Following are some major security techniques that can easily support a high level of E-commerce security:
• Digital hashing: a lower-strength security technique to help prevent unauthorized changes to documents transmitted electronically (a short illustration follows this list)
• Digital signatures: provide the same function as digital hashing but use a much more robust algorithm
• Public key cryptography: the cornerstone of much digital-age security (key management, such as the use of smart cards, is important in the various implementations of public key infrastructure (PKI))
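The digital hashing entry in the list above can be illustrated in a few lines of Python using only the standard library; the algorithm choice (SHA-256) and the document contents are examples, not a recommendation of any particular product or key length.

import hashlib

# Digital hashing: a fixed-length digest that changes if the document changes.
document = b"Purchase order 1234: 40 units at $12.50"        # hypothetical content
digest_sent = hashlib.sha256(document).hexdigest()

# The recipient recomputes the digest and compares it with the value received.
received = b"Purchase order 1234: 40 units at $12.50"
digest_received = hashlib.sha256(received).hexdigest()

print(digest_sent == digest_received)   # True only if the document is unaltered

A digital signature builds on the same idea by encrypting such a digest with the sender's private key, which is why the list describes it as the more robust variant.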

Auditing Wireless Security

Auditing an organization's wireless security architecture is not only useful professionally, but also an excellent personal exercise. The reason: physically walking around the premises with a wireless LAN audit tool is necessary to determine where wireless LANs and other wireless networks have been set up. Often, these LANs have been implemented without approval or documentation and, hence, a documentation review is not sufficient. Using a device such as IBM's Wireless Security Auditor, a reliable inventory of wireless networks and settings can be obtained (see Exhibit 3).

Exhibit 3. Wireless Security Auditor Tool (Courtesy IBM (www.research.ibm.com/gsal/wsa/))

Using IBM's Wireless Security Auditor as an example, the following are some of the configurations and potential vulnerabilities that might be evaluated in a wireless security review (a simple tabulation of such findings is sketched below):

• Inventory of access points
• Identification of encryption method (if any)
• Identification of authentication method
• Determination of WEP status (has it been implemented?)
• Notation of any GPS information (useful for determining access point location)
• Analytics on probe packets
• Identification of firmware status (up-to-date?)

Aside from the technology layer, standard IT/telecom controls should be included in the review: change control, documentation, standards compliance, key management, conformance with technical architecture, and appropriate policies for portable devices.
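As a rough illustration of how findings from such a review might be tabulated, the following Python sketch flags access points that lack WEP or still carry a factory-default SSID. The records, field names, and default-SSID list are invented for the example and are not the output format of IBM's tool.

# Hypothetical records gathered while walking the premises with an audit tool
scan_results = [
    {"ssid": "ACME-FINANCE", "mac": "00:0c:29:4f:8e:35", "wep": True,  "gps": "40.7484,-73.9857"},
    {"ssid": "default",      "mac": "00:40:96:a1:b2:c3", "wep": False, "gps": "40.7480,-73.9860"},
    {"ssid": "warehouse-ap", "mac": "00:02:2d:11:22:33", "wep": False, "gps": None},
]

DEFAULT_SSIDS = {"default", "linksys", "tsunami"}   # a few assumed factory defaults

def review(results):
    # Print a simple inventory and flag obvious weaknesses for follow-up.
    for ap in results:
        issues = []
        if not ap["wep"]:
            issues.append("WEP not enabled")
        if ap["ssid"].lower() in DEFAULT_SSIDS:
            issues.append("factory-default SSID")
        status = "; ".join(issues) if issues else "no obvious issues"
        print(f'{ap["ssid"]:<14} {ap["mac"]}  {status}')

review(scan_results)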

SUMMARY

Wireless security is important to both leading- and trailing-edge organizations. Applications and infrastructure uses are showing up everywhere, from the shop floor to the techie whose Bluetooth PDA collects his voicemail as he walks into the office. This rapid growth, reminiscent of the first days of PCs and the Internet, should be accompanied by a corresponding level of security, control, and standards. Here we go again….

Notes

1. In a sense, "air" also represents layer 1, the most basic and physical layer. Copper, fiber, and even (in the earliest days of the telegraph) barbed wire stand as examples of layer 1 media.
2. Service Set Identifier. An encoded flag attached to packets sent over a wireless LAN that indicates the packet is authorized to be on a particular radio network. All wireless devices on the same radio network must have the same SSID or they will be ignored.
3. MAC (medium access control) addresses are unique. "03:35:05:36:47:7a" is a sample MAC address that might be found on a wireless or wired LAN.


Chapter 32

Understanding Intrusion Detection Systems

Peter Mell

Intrusion detection is the process of detecting an unauthorized use of, or attack upon, a computer or a telecommunication network. Intrusion detection systems (IDSs) are designed and installed to aid in deterring or mitigating the damage that can be caused by hacking, or breaking into, sensitive IT systems. IDSs are software or hardware mechanisms that detect such misuse. IDSs can detect attempts to compromise the confidentiality, integrity, and availability of a computer or network. The attacks can come from outside attackers on the Internet, authorized insiders who misuse the privileges that have been given them, and unauthorized insiders who attempt to gain unauthorized privileges. IDSs cannot be used in isolation, but must be part of a larger framework of IT security measures.

THE BASIS FOR ACQUIRING IDSs

At least three reasons justify the acquisition of an IDS. They are:
1. To provide the means for detecting attacks and other security violations that cannot be prevented
2. To prevent attackers from probing a network
3. To document the intrusion threat to an organization

Detecting Attacks That Cannot Be Prevented

Using well-known techniques, attackers can penetrate many networks. Often, this happens when known vulnerabilities in the network cannot be fixed. For example, in many legacy systems, the operating systems cannot be updated; in those systems that can be updated, the administrators may not have, or take, the time to install all the necessary patches on a large number of hosts. In addition, it is usually impossible to map perfectly an organization's computer use policy to its access control mechanisms.


Authorized users can often perform unauthorized actions. In addition, users may demand network services and protocols that are known to be flawed and subject to attack. Although ideally it would be preferable to fix all of the vulnerabilities, this is seldom possible. Thus, an excellent approach to protecting a network may be the use of an IDS to detect when an attacker has penetrated a system using the vulnerability created by an uncorrectable flaw. At the least, it is better to know that a system has been penetrated, so that its administrators can perform damage control and recovery, than not to know that the system has been penetrated.

Preventing Attackers from Probing a Network

A computer or network without an IDS may allow attackers to explore its weaknesses leisurely and without retribution. If a single, known vulnerability exists in such a network, a determined attacker will eventually find and exploit it. The same network in which an IDS has been installed is a much more formidable challenge to an attacker. Although the attacker may continue to probe the network for weaknesses, the IDS should detect these attempts. In addition, the IDS can block these attempts, and it can alert IT security personnel who can then take appropriate action in response to the probes.

Documenting the Threat

It is important to verify that a network is under attack or is likely to be attacked in order to justify spending money on securing the network. Furthermore, it is important to understand the frequency and characteristics of attacks in order to understand what security measures are appropriate for the network. IDSs can itemize, characterize, and verify the threats from both outside and inside attacks. Thus, the operation of IDSs can provide a sound foundation for IT security expenditures. Using IDSs in this manner is important because many people believe — and mistakenly so — that no one would be interested in breaking into their networks. (Typically, this type of mistaken thinking makes no distinction between threats from either outsiders or insiders.)

TYPES OF IDSs

There are several types of IDSs available. They are characterized by different monitoring and analysis approaches. Each type has distinct uses, advantages, and disadvantages. IDSs can monitor events at three different levels: network, host, and application. They can analyze these events using two techniques: signature detection and anomaly detection. Some IDSs have the ability to respond automatically to attacks that are detected.

IDS MONITORING APPROACHES

One way to define the types of IDSs is to look at what they monitor. Some IDSs listen on network backbones and analyze network packets to find attackers. Other IDSs reside on the hosts that they are defending and monitor the operating system for signs of intrusion. Still others monitor individual applications.

Network-Based IDSs

Network-based IDSs are the most common type of commercial product offering. These mechanisms detect attacks by capturing and analyzing network packets. Listening on a network backbone, a single network-based IDS can monitor a large amount of information. Network-based IDSs usually consist of a set of single-purpose hosts that "sniff" or capture network traffic in various parts of a network and report attacks to a single management console. Because no other applications run on the hosts that are used by a network-based IDS, they can be secured against attack. Many of them have "stealth" modes, which make it extremely difficult for an attacker to detect their presence and to locate them.1

• Advantages. A few well-placed network-based IDSs can monitor a large network. The deployment of network-based IDSs has little impact on the performance of an existing network. Network-based IDSs are typically passive devices that listen on a network wire without interfering with normal network operation. Thus, it is usually easy to retrofit a network to include network-based IDSs with minimal installation effort. Network-based IDSs can be made very secure against attack and can even be made invisible to many attackers.
• Disadvantages. Network-based IDSs may have difficulty processing all packets in a large or busy network. Therefore, such mechanisms may fail to recognize an attack that is launched during periods of high traffic. IDSs that are implemented in hardware are much faster than those that are based on a software solution. In addition, the need to analyze packets quickly forces vendors to try to detect attacks with as few computing resources as possible, which may reduce detection effectiveness. Many of the advantages of network-based IDSs do not always apply to the more modern switch-based networks. Switches can subdivide networks into many small segments, usually with one fast Ethernet wire per host, and can provide dedicated links between hosts that are serviced by the same switch. Most switches do not provide universal monitoring ports, which reduces the monitoring range of a network-based IDS sensor to a single host. In switches that do provide such monitoring ports, the single port is frequently unable to mirror all the traffic that is moving through the switch.

Network-based IDSs cannot analyze encrypted information. Increasingly, this limitation will become a problem as the use of encryption, both by organizations and by attackers, increases. Most network-based IDSs do not report whether or not an attack was successful; they only report that an attack was initiated. After an attack has been detected, administrators must manually investigate each host that has been attacked to determine which hosts were penetrated.

Host-Based IDSs

Host-based IDSs analyze the activity on a particular computer. Thus, they must collect information from the host they are monitoring. This allows an IDS to analyze activities on the host at a very fine granularity and to determine exactly which processes and users are performing malicious activities on the operating system. Some host-based IDSs simplify the administration of a set of hosts by having the administration functions and attack reports centralized at a single IT security console. Others generate messages that are compatible with network administration systems.

• Advantages. Host-based IDSs can detect attacks that are not detectable by a network-based IDS because this type has a view of events that are local to a host. Host-based IDSs can operate in a network that is using encryption when the encrypted information is decrypted on (or before) reaching the host that is being monitored. Host-based IDSs can operate in switched networks.
• Disadvantages. The collection mechanisms must usually be installed and maintained on every host that is to be monitored. Because portions of these systems reside on the host that is being attacked, host-based IDSs may be attacked and disabled by a clever attacker. Host-based IDSs are not well suited for detecting network scans of all the hosts in a network because the IDS at each host sees only the network packets that the host receives. Host-based IDSs frequently have difficulty detecting and operating in the face of denial-of-service attacks. Host-based IDSs use the computing resources of the hosts they are monitoring.

Application-Based IDSs

Application-based IDSs monitor the events that are transpiring within an application. They often detect attacks by analyzing the application's log files. By interfacing with an application directly and having significant domain or application knowledge, application-based IDSs are more likely to have a discerning, fine-grained view of suspicious activity in the application.

• Advantages. Application-based IDSs can monitor activity at a very fine level of granularity, which often allows them to track unauthorized activity to individual users. Application-based IDSs can work in encrypted environments because they interface with the application that may be performing encryption.
• Disadvantages. Application-based IDSs may be more vulnerable than host-based IDSs to being attacked and disabled because they run as an application on the host that they are monitoring.

The distinction between an application-based IDS and a host-based IDS is not always clear. Thus, for the remainder of this chapter, both types will be referred to as host-based IDSs.

IDS EVENT ANALYSIS APPROACHES

There are two primary approaches to analyzing computer and network events to detect attacks: signature detection and anomaly detection. Signature detection is the primary technique used by most commercial IDS products. However, anomaly detection is the subject of much research and is used in limited form by a number of IDSs.

Signature-Based IDSs

Signature-based detection looks for activity that matches a predefined set of events that uniquely describe a known attack. Signature-based IDSs must be specifically programmed to detect each known attack. This technique is extremely effective and is the primary method used in commercial products for detecting attacks.

• Advantages. Signature-based IDSs are very effective in detecting attacks without generating an overwhelming number of false alarms.
• Disadvantages. Signature-based IDSs must be programmed to detect each attack and thus must be constantly updated with the signatures of new attacks. Many signature-based IDSs have narrowly defined signatures that prevent them from detecting variants of common attacks.

Anomaly-Based IDSs

Anomaly-based IDSs find attacks by identifying unusual behavior (i.e., anomalies) that occurs on a host or network. They rely on the observation that some attackers behave differently than "normal" users and thus can be detected by systems that identify these differences. Anomaly-based IDSs establish a baseline of normal behavior by profiling particular users or network connections and then statistically measure when the activity being monitored deviates from the norm. These IDSs frequently produce a large number of false alarms because normal user and network behaviors can vary widely. Despite this weakness, the researchers working on applying this technology assert that anomaly-based IDSs are able to detect never-before-seen attacks, unlike signature-based IDSs, which rely on an analysis of past attacks.

Although some commercial IDSs include restricted forms of anomaly detection, few, if any, rely solely on this technology. However, research on anomaly detection IDS products continues.

• Advantages. Anomaly-based IDSs detect unusual behavior and thus have the ability to detect attacks without having to be specifically programmed to detect them.
• Disadvantages. Anomaly detection approaches typically produce a large number of false alarms due to the unpredictable nature of computing and telecommunication users and networks. Anomaly detection approaches frequently require extensive "training sets" of system event records to characterize normal behavior patterns.

IDSs THAT AUTOMATICALLY RESPOND TO ATTACKS

Because human administrators are not always available when an attack occurs, some IDSs can be configured to respond to attacks automatically. The simplest form of automated response is active notification: upon detecting an attack, an IDS can e-mail or page an administrator. A more active response is to stop an attack in progress and then block future access by the attacker. Typically, IDSs do not have the ability to block a particular person, but instead block the Internet Protocol (IP) addresses from which an attacker is operating. It is very difficult to automatically stop a determined and knowledgeable attacker. However, IDSs can often deter expert attackers or stop novice hackers by:
• Cutting TCP (Transmission Control Protocol) connections by injecting reset packets into the attacker's connections to the target of the attack
• Reconfiguring routers and firewalls to block packets from the attacker's location (i.e., the IP address or site)
• Reconfiguring routers and firewalls to block the protocols that are being used by an attacker
• Reconfiguring routers and firewalls, in extreme situations, to sever all the connections that use particular network interfaces

A more aggressive way to respond to an attacker is to launch attacks against, or attempt to actively gain information about, the attacker's host or site. However, this type of response can prove extremely dangerous for an organization to undertake because doing so may be illegal or may cause damage to innocent Internet users. It is even more dangerous to allow IDSs to launch these attacks automatically, but limited, automated "strike-back" strategies are sometimes used for critical systems. (It would be wise to obtain legal advice before pursuing any of these options.)
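To make the two analysis approaches, and the simplest kind of automated response, concrete, the following Python sketch runs hypothetical connection records past a small signature list and a crude statistical baseline. The signatures, threshold, event records, and the block_address stub are assumptions made for illustration; they do not describe any particular commercial product.

from statistics import mean, stdev

# Signature detection: match event payloads against known-attack patterns.
SIGNATURES = ["/etc/passwd", "cmd.exe", "xp_cmdshell"]           # illustrative strings

def signature_hits(payload: str):
    return [sig for sig in SIGNATURES if sig in payload]

# Anomaly detection: flag activity that deviates sharply from a learned baseline.
BASELINE_BYTES = [1200, 1150, 1300, 1250, 1180]                  # hypothetical "normal" transfer sizes

def is_anomalous(observed: int, history=BASELINE_BYTES, threshold=3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

# Automated response stub: a real system would reconfigure a router or firewall.
def block_address(ip: str):
    print(f"[response] blocking packets from {ip}")

events = [
    {"src": "10.0.0.7",   "bytes": 1210,   "payload": "GET /index.html"},
    {"src": "192.0.2.44", "bytes": 1195,   "payload": "GET /scripts/..%c0%af../cmd.exe"},
    {"src": "192.0.2.80", "bytes": 950000, "payload": "GET /report.pdf"},
]

for event in events:
    hits = signature_hits(event["payload"])
    if hits:
        print(f'signature alert from {event["src"]}: {hits}')
        block_address(event["src"])
    elif is_anomalous(event["bytes"]):
        print(f'anomaly alert from {event["src"]}: {event["bytes"]} bytes vs. baseline')

Even this toy example shows the trade-offs discussed above: the signature list catches only what it has been told about, while the statistical check will flag any legitimate but unusual transfer, which is where false alarms come from.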

TOOLS THAT COMPLEMENT IDSs

Several tools exist that complement IDSs and are often labeled as IDSs by vendors because they perform functions that are similar to those accomplished by IDSs. These complementary tools are honey pot systems, padded cell systems, and vulnerability assessment tools. It is important to understand how these products differ from conventional IDSs.

Honey Pot and Padded Cell Systems

Honey pots are decoy systems that attempt to lure an attacker away from critical systems. These systems are filled with information that is seemingly valuable but which has been fabricated and which would not be accessed by an honest user. Thus, when access to the honey pot is detected, there is a high likelihood that it is an attacker. Monitors and event loggers on the honey pot detect these unauthorized accesses and collect information about an attacker's activities. The purpose of the honey pot is to divert an attacker from accessing critical systems, collect information about the attacker's activity, and encourage the attacker to stay on the system long enough for administrators to respond to the intrusion.

Padded cells take a different approach. Instead of trying to attract attackers with tempting data, a padded cell waits for a traditional IDS to detect an attacker. The attacker is seamlessly transferred to a special padded cell host. The attacker may not realize anything has happened, but is now in a simulated environment where no harm can be caused. Similar to the honey pot, this simulated environment can be filled with interesting data to convince an attacker that the attack is going according to plan. Padded cells offer unique opportunities to monitor the actions of an attacker. IDS researchers have used padded cell and honey pot systems since the late 1980s, but until recently no commercial products have been available.

• Advantages. Attackers can be diverted to system targets that they cannot damage. Administrators can be given time to decide how to respond to an attacker. An attacker's actions can be monitored more easily, and the results used to improve the system's protections. Honey pots may be effective in catching insiders who are snooping around a network.
• Disadvantages. Honey pots and padded cells have not yet been shown to be widely useful security technologies. Once an expert attacker has been diverted into a decoy system, the invader may become angry and launch a more hostile attack against an organization's systems. A high level of expertise is needed for administrators and security managers to use these systems. The legal implications of using such mechanisms are not well defined.

Vulnerability Assessment Tools

Vulnerability assessment tools determine when a network or host is vulnerable to known attacks. Because this activity is closely related to detecting attacks, these mechanisms are sometimes referred to as intrusion detection tools. They come in two varieties: passive and active.

• Passive vulnerability assessment tools scan the host on which they reside for the presence of insecure configurations, software versions known to contain exploitable flaws, and weak passwords.
• Active assessment tools reside on a single host and scan a network looking for vulnerable hosts. The tool sends a variety of network packets at target hosts and, from the responses, can determine the server and operating system software on each host. In addition, it can identify specific versions of software and determine the presence or absence of security-related patches. The active assessment tool compares this information with a library of software version numbers known to be insecure and determines whether the hosts are vulnerable to known attacks from these sources.

LIMITATIONS OF IDSs

Intrusion detection products have limitations that one must be aware of before endeavoring to deploy an IDS. Despite vendor claims to the contrary, most IDSs do not scale well as enterprisewide solutions. The problems include the lack of sufficient integration with other IT security tools and sophisticated network administration systems, the inability of IDSs to assess and visualize enterprise-level threats, and the inability of organizations to investigate the large number of alarms that can be generated by hundreds or thousands of IDS sensors. In addition:
1. Many IDSs create a large number of false positives that waste administrators' time and may even initiate damaging automated responses.
2. While almost all IDSs are marketed as real-time systems, during heavy network or host activity an IDS may take several minutes before it reports and responds to an attack automatically. Usually, IDSs cannot detect newly published attacks or variants of existing attacks. This can be a serious problem as 30 to 40 new attacks are posted on the World Wide Web every month. An attacker may wait for a new attack to be posted and then quickly penetrate a target network.
3. Automated responses of IDSs are often ineffective against sophisticated attackers. These responses usually stop novice hackers, but if they are improperly configured, these reactions can harm a network by interrupting its legitimate traffic.

4. IDSs must be monitored by skilled IT security personnel to achieve maximum benefits from their operation and to understand the significance of what is detected. IDS maintenance and monitoring can require a substantial amount of personnel resources.
5. Many IDSs are not failsafe. They are not well protected from attack or subversion.
6. Many IDSs do not have interfaces that allow users to discover cooperative or coordinated attacks.

DEPLOYMENT OF IDSs

Intrusion detection technology is a necessary addition to every large organization's IT security framework. However, given the weaknesses that are found in some of these products, and the relatively limited security skill level of most system administrators, careful planning, preparation, prototyping, testing, and specialized training are critical steps for effective IDS deployment. It is suggested that a thorough requirements analysis be performed before IDSs are deployed. The intrusion detection strategy and solution selected should be compatible with the organization's network infrastructure, policies, and resource level.

Organizations should consider a staged deployment of IDSs to gain experience with their operation and to ascertain how many monitoring and maintenance resources are required. There is a large variance in the resource requirements for each type of IDS. IDSs require significant preparation and ongoing human interaction. Organizations must have appropriate IT security policies, plans, and procedures in place so that the personnel involved will know how to react to the many and varied alarms that the IDSs will produce.

A combination of network-based IDSs and host-based IDSs should be considered to protect an enterprisewide network. First deploy network-based IDSs because they are usually the simplest to install and maintain. The next step should be to defend the critical servers with host-based IDSs. Honey pots should be used judiciously and only by organizations with a highly skilled technical staff willing to experiment with leading-edge technology. Currently, padded cells are available only as research prototypes.

Deploying Network-Based IDSs

There are many options for placing a network-based IDS, and there are different advantages for each location. See Exhibit 1 for a listing of these options.

Exhibit 1. Placement of a Network-Based IDS

Location: Behind each external firewall
Advantage: Sees attacks that are penetrating the network's perimeter defenses from the outside world

Location: In front of an external firewall
Advantage: Proves that attacks from the Internet are regularly launched against the network

Location: On major network backbones
Advantage: Detects unauthorized activity by those within a network and monitors a large amount of a network's traffic

Location: On critical subnets
Advantage: Detects attacks on critical resources

Deploying Host-Based IDSs

Once an organization has deployed network-based IDSs, the deployment of host-based IDSs can offer an additional level of protection. However, it can be time-consuming to install host-based IDSs on every host in an enterprise. Therefore, it is often preferable to begin by installing host-based IDSs on critical servers only. This placement will decrease the overall costs associated with the deployment and will allow the limited number of personnel available to work with the IDSs to focus on the alarms generated from the most important hosts. Once the operation and maintenance of host-based IDSs become routine, more IT security-conscious organizations may consider installing host-based IDSs on the majority of their hosts. In this case, it would be wise to purchase host-based systems that have an easy-to-use, centralized supervision and reporting function because the administration of alert responses from a large set of hosts can be intimidating.

Notes

1. Stealth modes make it extremely difficult for an attacker to detect an IDS's presence and to locate it.


Section 3

Providing Application Solutions

The increased complexity and variety of infrastructure components, the need to provide software solutions for both internal and external audiences, and today's integration requirements are the primary reasons why software development today is a highly challenging task. Some of the conceptual complexity was introduced when client/server architectures were adopted. However, today's Web-based, highly distributed systems are often, in practice, even more complex because of the variety of client devices and services that need to be able to communicate with each other.

Section 3 addresses the challenges related to provisioning application solutions in modern systems development and deployment environments. An underlying theme is the importance of following well-established software engineering and project management tools, techniques, and methods as relevant. The chapters in this section are organized into four topic areas:

• New Tools and Applications
• Systems Development Approaches
• Project Management
• Software Quality Assurance

NEW TOOLS AND APPLICATIONS

One of the most promising and widely accepted approaches to addressing the complexity in Web-based development is Web services. Chapter 33, "Web Services: Extending Your Web," provides an introduction to the emerging technologies that enable Web services. A comprehensive example is used to illustrate the benefits an organization can achieve by utilizing Web services for either internal or external purposes. For example, application interfaces can be developed for external users to support revenue-generating services or to collaborate with business partners. The chapter also provides a brief comparison of .NET and J2EE® as development environments.

Chapter 34, "J2EE versus .NET: An Application Development Perspective," continues the same theme by providing a more in-depth comparison between the two major component-based development architectures: .NET and J2EE. A comprehensive review and step-by-step comparison is provided to help decision makers analyze the differences between them. Although answers to technology choice questions are seldom black or white, this chapter provides useful guidance to managers who are looking for the best solution for their organization.

Chapter 35, "XML: Information Interchange," focuses on one of the most important technologies for developing modern application solutions. XML is both one of the core elements of Web services and a much more widely used mechanism for defining document structures. As a tool for specifying the meaning of various document elements, XML was originally designed to address one of the most important weaknesses of HTML.

Since its inception in the late 1990s, XML has been widely adopted in a variety of environments, including some of the most popular personal productivity software applications and large B2B E-business systems. The effective use of XML requires that cooperating organizations define document standards for particular data exchange purposes, and this chapter sheds light on the standardization efforts within and between industries that are as important as the XML standard itself.

The final chapter within this topic area — Chapter 36, "Software Agent Orientation: A New Paradigm" — provides some excellent real-life examples of the use of software agents in organizations. The authors carefully avoid the hype and overpromises that are often associated with new technologies and realistically evaluate the possibilities that agent technologies offer for organizations that are willing to embark on a learning process. Among the applications of agent technologies discussed are e-mail filtering and routing, data warehousing, and Internet searches, as well as financial, distance education, and healthcare applications.

SYSTEMS DEVELOPMENT APPROACHES

Selected for this topic area are six chapters that discuss the role and usage of various systems development methods, techniques, and approaches. The first is concerned with paradigm shifts within software development environments. Chapter 37, "The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic," provides a historical overview of methodologies for systems development projects from an evolutionary perspective. Although methodologies have in the past sometimes been hailed as "the holy grail," evidence from the field suggests that multiple methodologies are, in fact, typically customized for each development project. The author advocates an eclectic, problem-centered view of methodology, in contrast to a "one-size-fits-all" approach.

Chapter 38, "Usability: Happier Users Mean Greater Profits," focuses closely on the business value of usability and clearly demonstrates why high usability should be a requisite objective of every systems development project. The author emphasizes the importance of a process to ensure the creation of usable systems and also builds a strong case that methods relying on very simple, low-technology tools can often be used to develop systems with high usability. Throughout the chapter, the role of users in developing usable systems is a central theme.

Chapter 39, "UML: The Good, the Bad, and the Ugly," evaluates the advantages and disadvantages of the Unified Modeling Language (UML). Since its inception, UML has gained strong acceptance, particularly within those organizations that follow the object-oriented paradigm throughout all development stages.

It has also become the basis for a number of development environments. The authors provide an objective review of UML and its organizational uses and discuss the characteristics of the various modeling languages that are part of UML.

Due to their prominent role in business process modeling, use cases are one of the most widely used modeling tools in UML. Chapter 40, "Use Case Modeling," provides a tutorial on this approach, as well as some useful insights into the differences between various use case modeling approaches. Also discussed are the reasons why use case modeling has gained popularity over traditional process modeling approaches (such as data flow diagramming) and various criteria to consider when evaluating this approach for organizational adoption.

Several agile ("lightweight") development methodologies, particularly eXtreme Programming (XP), have recently received considerable attention. Proponents argue that these methodologies are a solution to many of the problems that continue to plague software development, whereas opponents find them little more than structured hacking. However, anecdotal evidence suggests that many organizations are increasingly applying some or all of the core principles of one of the agile methodologies in some of their development projects. Chapter 41, "Extreme Programming and Agile Software Development Methodologies," helps IS managers responsible for methodology standards evaluate whether or not XP is suitable for their organization. The authors are clear proponents of XP, but the chapter also provides a useful evaluation of agile approaches for those who are not yet believers. Particularly interesting is the authors' focus on the values of XP (Simplicity, Feedback, and Courage), which provide the foundation for the methodology itself.

Modern software development increasingly entails some type of component-based approach, and various architectures (such as CORBA, ActiveX, .NET, and J2EE) have been introduced to support the use of components. The final chapter of this section — Chapter 42, "Component-Based IS Architecture" — provides guidance on a variety of issues related to software development using components. The authors point out the advances in distributed architectures and the Internet that have had a strong impact on the utilization of component technologies, and they discuss the organizational and technical requirements underlying the effective and economically viable use of components.

PROJECT MANAGEMENT

The importance of project management is one of the stable constants in the otherwise dynamic world of systems development.

Development approaches, methodologies, tools, and technologies change, but many of the core issues related to the successful management of projects remain the same. The four chapters under this topic provide guidance on various project management challenges.

Project risk management is the main focus of Chapter 43, "Does Your Project Risk Management System Do the Job?" The chapter emphasizes the central role of risk management in ensuring project success and provides a comprehensive list of common project risk management mistakes. Early recognition helps avoid these mistakes. The authors provide a review of the most common threats to IT projects and ways to mitigate them. The chapter ends with a useful checklist of risk management tasks for project managers to perform.

The author of Chapter 44, "Managing Development in the Era of Complex Systems," emphasizes the need for new skills for managing today's more complex development projects. The chapter identifies three factors associated with success, based on complex projects in a large consulting organization: business vision, system testing from a program management (versus single project) perspective, and a phased rollout strategy. Complexity is an unavoidable characteristic of large systems projects today, especially enterprise system projects that involve cross-functional integration. It is therefore vitally important to develop approaches to manage project complexity and related risks.

The author of Chapter 45, "Reducing IT Project Complexity," also focuses on mechanisms for managing and reducing project complexity. This chapter evaluates factors that increase project complexity, paying specific attention to coordination issues, and provides an analysis of the specific steps that an organization can take to control complexity and maintain it at an acceptable level. The author describes a wide range of factors affecting project risk: the scope and nature of the project, development technology, organizational structures, and culture.

SOFTWARE QUALITY ASSURANCE

Section 3 ends with three chapters that focus on software quality and its potential implications for the individual, the organization, and society. Chapter 46, "Software Quality Assurance Activities," provides a systematic review of software quality assurance activities during the entire software development life cycle. The chapter clearly demonstrates that software quality has to be built into the product; it cannot be achieved just by testing the system before it is delivered to the users. The author recognizes the importance of formal approaches such as the Software Engineering Institute's Capability Maturity Model Integration (CMMI) and the ISO 9000 set of quality assurance standards, but emphasizes the importance of looking beyond the extensive documentation required by these guidelines and ensuring that basic quality assurance practices are implemented through all stages of a development project.

Because quality assurance requirements are not the same for every company and project, the approach chosen must be adjusted to fit the organizational context.

The topical focus of Chapter 47, "Six Myths about Managing Software Development," is broader than quality assurance, but many of the misconceptions identified are closely related to software quality. The author presents six "incorrect" assumptions that often guide the actions of IS managers and developers during development projects. Many of the myths are controversial, but all of them are thought-provoking and will challenge readers to reevaluate some fundamental assumptions. The author concludes the chapter with several recommendations that, if implemented, can significantly improve the quality of the final software product.

The final chapter in this section, Chapter 48, "Ethical Responsibility for Software Development," reviews the consequences of software quality problems from the perspectives of both ethical and legal responsibility. The need for both organizations and individual developers to be ethically responsible is clearly established, and the legal liabilities associated with not disclosing known defects in software products are discussed. The authors also provide a useful introduction to the widely debated Uniform Computer Information Transactions Act (UCITA).


Chapter 33

Web Services: Extending Your Web

Robert VanTol

DYNAMIC SITES TODAY

Now that Web sites have moved beyond static marketing sites and simple E-commerce storefronts, corporations face the challenge of making their internal "protected" databases available to the world. Companies are being forced to open their systems to provide real-time transactional capabilities to customers and partners or risk losing market share to more accessible competitors. Until now, companies had to host their own Web sites internally to achieve this, so that they could attach directly to the corporate databases. This involved not only additional hardware and increased communication capacity, but also highly skilled IT resources to manage the site and the associated security risks. The question: is there a better way?

Web services allow the full separation of an Internet presence and company data, as shown in Exhibit 1. In the simplest case, a set of Web services can be created that allows viewing, updating, and adding records to internal systems. These services can be hosted within the company and made available to business partners through the Internet. The "public" Internet presence can be located anywhere and on any platform. The "public" site then communicates only with the Web services via HTTP, not with the database directly. This allows companies to host their "public" sites at an ISP that has the infrastructure, security, and knowledge to ensure efficient Web site operation.

WEB SERVICES OVERVIEW

Web services are modular, self-contained sets of business logic or functions that a company can make available and that can be described, published, located, and invoked through the Internet. Web services are the building blocks of a site, similar to the DLLs or COM objects on current sites, but with the added benefit of being callable from external Web servers across the Internet.
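A minimal sketch of the separation just described, written in Python: the externally hosted "public" site reaches company data only by posting an XML request over HTTP to an internally hosted Web service. The endpoint URL, message format, and field names are hypothetical placeholders, not part of any real service.

import urllib.request

# Hypothetical internal Web service endpoint exposed to the public site over HTTP
SERVICE_URL = "https://webservices.example.com/inventory/check"

def check_stock(sku: str) -> str:
    # POST a small XML request and return the raw XML response.
    body = f"<stockRequest><sku>{sku}</sku></stockRequest>".encode("utf-8")
    request = urllib.request.Request(
        SERVICE_URL,
        data=body,
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.read().decode("utf-8")

# The public site never connects to the corporate database directly;
# it exchanges only XML messages with the service layer.
if __name__ == "__main__":
    print(check_stock("ABC-123"))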



Exhibit 1. Network Diagram

the Internet. Potential Web services on a typical E-commerce site could include a tax calculation service, a shopping cart service, or an inventory control application. These Web services can be internal to an organization or, in some cases, provided by an external company, such as a bank that might develop a credit card clearing service. Exhibit 2 shows the Web services that are available in the sample application discussed below.

The idea of creating software components is not new. In fact, it has been tried many times in the past. But what makes the Web services model exciting is that it is backed by industry leaders such as Microsoft, IBM, Oracle, and Sun, as well as smaller vendors. Web services are based on how companies, systems, and people actually behave, but within a distributed Internet application framework. Existing COM or EJB objects can be wrapped and distributed with this technology, allowing greater reuse of existing code. By allowing legacy applications to be wrapped in SOAP and exposed as a service, the Web services architecture easily enables new interoperability between existing applications. Wrapping hides the underlying plumbing from the developer, which allows services to coexist in existing legacy applications and also be exposed to a wider audience as a Web service.

Web services need to provide standard communication protocols, data representation, description languages, and a discovery mechanism. The major players have worked together to create a foundation for interoperability with a set of Web services standards such as XML, SOAP, UDDI, and WSDL, which are defined below.

XML

Extensible Markup Language (XML) is the foundation for Web services and was a standard long before Web services were developed. XML was chosen


Exhibit 2. Application Architecture

in part to ensure that there existed tools and expertise in the marketplace even though the concept of Web services was relatively new. Web services promote interoperability by minimizing the amount of shared understanding required. The XML-based service description is the lowest common denominator, allowing Web services to be truly platform and language independent and implemented using a large variety of different underlying infrastructures. For Web services to overcome the deficiencies of their predecessors, they needed to be built on standard, widely supported protocols.

XML treats all information as text, and an XML Schema makes it possible to describe the type and structure of XML documents using XML itself. Because the XML Schema is self-documenting, it is perfect for Web services where, depending on the platform, converting between data types will be required. The data is delivered as text but the type is defined in the schema, ensuring that even if the originating type is not supported by the calling platform, the data can still be read.

SOAP

Simple Object Access Protocol (SOAP), the standard for communicating the XML messages, provides the communications link between systems

and applications. Similar to HTML, a SOAP message makes a clear distinction between its header and its body. To maintain the openness of the XML messages, SOAP transmits all information via HTTP, enabling it to pass seamlessly through firewalls while maintaining message integrity. This also allows Web services to be secured using the same tools that are used for current Web sites. SOAP also defines a standard for error representation and a standard binding to HTTP. To invoke a SOAP operation, several pieces of information are needed, other than the XML Schema, such as the endpoint URL, the protocols supported, encoding, etc.

WSDL

Web Services Description Language (WSDL) is the XML-based specification for describing Web services. The WSDL describes what functions the Web service performs and how the communication operates. The WSDL is independent of any vendor and does not depend on a specific platform or programming language, to ensure that it is as open a platform as possible.

UDDI

Universal Description, Discovery, and Integration (UDDI) is an industry standard set of protocols and a public directory for the registration and real-time lookup of available Web services. Companies that wish to make their Web services available register them with a UDDI Registry, and companies that wish to use the services are able to find them in this directory. UDDI is an industrywide initiative to standardize how Web services are discovered. It defines a SOAP-based API for querying centralized Web service repositories. UDDI makes it possible for developers to discover the technical details of a Web service (WSDL) as well as other business-oriented information and classifications. For example, using UDDI you should be able to query for all Web services related to stock quote information, implemented over HTTP, and that are free of charge. UDDI is to Web services what the registry is to DCOM. UDDI simply makes it possible to build an extra level of abstraction into your Web service so you do not have to hardcode the specific endpoint location at development time. This approach will work within a controlled environment like an intranet or among trusted business partners; in the real world, however, there are too many other issues (quality, legal, etc.) that will inhibit its use. UDDI still can be a useful tool for developers when they need to discover available Web services, either in their enterprise or on the Web at development time.

The power of Web services lies in the fact that they are self-contained, modular applications. They can be described, published, located, and called over the Internet. Web services architecture comprises three basic roles: provider, requester, and broker. The Internet has taught us that to be usable, applications need to be open to many platforms and operating systems.

"Old-style" application integration involved building connectors that were specific to the device and applications. If a system needed to talk to two different applications, one would need two different connectors. For the next generation of E-business systems to be successful, they need to be flexible. Web services allow these systems to be composed of many modular services that can be mixed and matched as needed. These services are "loosely coupled," which allows them to be changed or expanded at any time without affecting the system as a whole. As the rate of change increases in not only computers but also business, systems need to be flexible so that they do not have to be rewritten every time a subsystem is modified. Web services are only bound at runtime, thus allowing systems to grow and change independently of each other. The WSDL describes the capabilities of the Web service. This allows developers to create applications without knowing the details of how a Web service is architected.

CHALLENGES SURROUNDING WEB SERVICES

Web services technologies are not without their own issues. Imagine a bank creating a credit card Web service that it makes available to hundreds of business partners. What would happen if for some reason the bank's system went down? None of its business partners would be able to process credit card payments. Testing and debugging applications that span operating systems and companies will prove to be a challenge; along with every live Web service, a "test mode" will have to be created for companies to use during development and testing. As the lines blur between control of services, there is a potential for mass finger-pointing when things do go wrong. Once the systems are deployed and in production, methods for communicating and approving changes to Web services need to be developed.

There is also the issue of payment for using these services. A credit card service is fairly straightforward because it involves a transaction charge on each purchase. But what about a shopping cart service, which could be used many times without the shopper actually purchasing products? Do you charge monthly, per use, by bandwidth, or a combination of all of the above? Likely, various models will emerge and smart providers will be flexible as they seek to build a sustainable business model.

Web services allow one to take advantage of the "silos of expertise." If one utilizes a third-party shipping firm to deliver the goods sold on one's Web site, why should one develop a shipment tracking tool? The shipping company has the expertise in its business to create this application and to modify it as its business changes. This would also be a value-added service for its business partners because once the shipping company develops its Web service, it can be used by any of its business partners to track shipments.


Exhibit 3. Application Flow

PROVE IT … THE "HOTTUB" SAMPLE APPLICATION

Everyone hears the promises about new technologies and software systems, but many of us have been jaded by years of software promises that do not deliver on the hype. The only way to prove these claims is to actually see them in action. We decided to build a prototype site based on the Web services model. The result was a project, code named "HotTub," which among other things was meant to prove the concept of reusable Web services across multiple platforms. Exhibit 3 shows the application flow.

To accomplish this task, our developers created two versions of an E-commerce bookstore: one created using Microsoft's .NET framework written in C#, and the other written in JSP (Java Server Pages) on an Apache Tomcat Web server. In the initial phase of the project, all Web services were written on the Microsoft platform in C# with a Microsoft SQL Server 2000 database. Also included, but not within the scope of this chapter, was back-end legacy integration utilizing Microsoft's BizTalk Server.

After the initial planning and architecture were determined, the database was created and loaded with standard test data. The team then developed seven distinct Web services: Publisher List, Catalog Inquiry, Shopping Cart, Tax Calculator, Shipping Calculator, Credit Card Authorization, and Final Order Processing. Each service was planned to be distinct and generic to allow it to be used within a wide range of other applications (such as WAP phones, kiosks, interactive voice response, etc.) that could be developed in future phases of the project. The goal of this project was not only to explore Web services, but to test-drive the new Microsoft .NET framework and the C# language. Although

the developers on the team were traditionally VB/ASP coders, the fully integrated development environment of .NET helped them develop the Web services in rapid succession. Once the Web services were complete, the development of the .NET and JSP Java front-end versions was started in parallel. One of the first tasks for both streams was to consume the Publisher List Web service so that the Web site could allow the user to select, from a drop-down box, the publisher for which to search. The .NET team was able to readily consume the Web service because it was ".NET" communicating with ".NET." All the functionality was built in Visual Studio .NET to consume the Web service and to treat the data as any dataset that was read directly from a database. Once the correct versions of the Apache SOAP and XML Parser were installed, the JSP team was able to consume the exact same .NET Web service and display the resulting data.

The JSP team took a different approach to data manipulation and used the opportunity to implement XSL transformations on the standard XML that was returned from the Web service. XSL is a very powerful XML styling language that can manipulate an XML file and return standard HTML. At this stage, the developers had proven that Web services were indeed cross-platform and cross-language compatible. All that was left to do was to create fully functional sites by implementing the remainder of the Web services that had been developed. Much of the site logic itself was contained in the Web services; for example, the Shopping Cart service contained all the logic to manage adding, updating, and deleting records from the shopping cart. This fact made the creation of the "public" layer of the site progress very quickly. All the development team had to worry about was passing the correct parameters to the Web service.

As typically happens in any IT project during development, a bug was found in one of the Web services. Once the team member corrected the problem in the logic, the Web service was copied to the Web server and deployed without stopping work on the rest of the system to register the new object or restart the server. The development team was impressed by the ease of deployment of the .NET objects and was quite disappointed when the next project to which they were assigned forced them to return to standard ASP programming.

LESSONS LEARNED

It was discovered that Web services not only lived up to the hype and promise but also in some ways, with the tools available, exceeded them. By separating the database from the Web site, Web services allow the Web site to be hosted virtually anywhere on any platform.

One of the lessons learned was the criticality of planning. Major changes to Web services once they have been deployed could have a large impact on the sites consuming the Web services. For this reason, there will be cases where there are several versions of the "same" Web service deployed at the same time. Version control and service level definitions are important because clients will use "older" versions and upgrade when they see benefits in the most current service offered.

The goal of not being tied to any one platform was met. Two systems were created that had the same functionality and consumed the same data yet were developed on totally separate operating systems, Web servers, and programming languages. This is not only a benefit on systems that cross the boundaries of companies, but is a great benefit for companies that are migrating from one system to another or integrating separate divisions that use different technology platforms.

By forcing applications to be accessed through Web services, one ends up with a series of base functions. Once these are individually analyzed, they can be distributed to the experts in that field. A bank might produce a credit card Web service and FedEx might produce a package tracking service. This will save development time and probably add features to products that would not have been considered feasible using previous technologies.

Security will be one of the key concerns when choosing companies with which to partner. Security can be tight, but it has to be carefully planned when extending Web services to outside organizations. The good thing is that because Web services travel over HTTP, they are very firewall friendly and can be secured using existing Web site methods.

Once a core set of external Web services is created and commercially available, or an organization has a library of its own services, the development time of new applications that can take advantage of these services is greatly reduced. The front end becomes the "mortar" that holds the Web service "bricks" together. If one takes the simple function of a tax calculator and multiplies the time it takes to create one by the number of times that different systems require this function, one can imagine how much time can be saved by having a standard tax Web service. Now imagine a situation where the tax rate changes. If all sites within a company used the same Web service, it would only have to be changed once inside the Web service without having to make changes to the actual Web site.

TECHNOLOGIES EMPLOYED

Microsoft's .NET framework and the new language C# were built from the ground up for Internet development and, more importantly, for Web services. To change a typical C# component into a Web service meant the addition of only one line of code; the .NET framework took care of all the interfaces and plumbing required.
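As a rough sketch of what that single line looks like in the ASMX Web service model of that .NET release, the fragment below exposes a hypothetical tax-calculation method over SOAP simply by marking it with the WebMethod attribute. The class name, method signature, and flat rate are illustrative assumptions, not the actual HotTub code.

```csharp
using System.Web.Services;

// Hypothetical service class; only the [WebMethod] attribute is needed to
// expose the method over SOAP/HTTP -- the framework generates the WSDL and
// handles the SOAP plumbing.
public class TaxCalculator : WebService
{
    [WebMethod]
    public decimal CalculateTax(decimal subtotal, string region)
    {
        // Illustrative flat rate; a real service would look the rate up.
        return subtotal * 0.07m;
    }
}
```

Remove the attribute and the method reverts to an ordinary class member, which is why the change amounts to one line of code.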

Coding in .NET now provides access to an object model that most programmers thought was gone forever when they moved to the Web. There are now data grids and button properties that resemble the days of client/server programming but are built with the Internet in mind. To display a table of records with alternating row colors and paging is now a matter of setting three or four properties instead of 20 lines of code in ASP or JSP. Although C# is a new language, it shares a large percentage of its syntax with C++, which makes it an easy transition for a C++ programmer. Visual Studio .NET is such a complete development environment that even VB and ASP programmers with no previous C++ experience can very rapidly develop complex applications using C#.

Although the J2EE platform is also able to consume Web services (whether built on Java or on Microsoft .NET) natively with code that is available now, it was necessary to download various versions of add-in components to the Tomcat server to access the SOAP protocols and display the XML using XSL templates. The versions of these components were not always compatible; once the right combination was found, they worked very efficiently. The combination of SOAP Web services returning XML and XSL templates converting the XML into dynamic Web pages is very powerful. Although this approach is compatible across many systems, it is missing a fully integrated development environment like that of Microsoft Visual Studio .NET.

SUMMARY

While some of the technologies are new, the basis for Web services is solidly grounded in existing standards that will speed the adoption of this technology. Additional standards may have to be put in place before there is seamless business-to-business integration, but this is the first step on that road; in the next 18 months, Web services will emerge as the tool for extending Web sites into E-commerce collaboration engines.



Chapter 34

J2EE versus .NET: An Application Development Perspective V. Ramesh Arijit Sengupta

J2EE and .NET are two competing frameworks, proposed by an alliance of companies led by Sun Microsystems and by Microsoft, respectively, as platforms for application development on the Web using the object-oriented programming paradigm. Both J2EE and .NET are component-based frameworks, featuring a system-level abstraction of the machine (called the "Java Virtual Machine" in J2EE and the "Common Language Runtime" in .NET). They both provide extensive support for developing various kinds of applications, including support for Web services and independent distributed software components. This chapter examines the relative benefits and drawbacks of the two frameworks. A brief overview of each of the frameworks is given in the next section. We then examine the differences between the two frameworks along various dimensions. We first examine the capabilities of each of the above frameworks and then present some issues that decision makers need to consider when making the J2EE versus .NET decision.

J2EE (JAVA 2 PLATFORM, ENTERPRISE EDITION)

The J2EE framework (see Exhibit 1) provides a component-based model for enterprisewide development and deployment of software using the Java programming language. This platform defines the standard for developing applications using a multi-tier framework and simplifies the process of application development by building applications using modular components and handling different levels of application behavior automatically without the necessity of complex code.



Exhibit 1. The J2EE Framework (Adapted from Singh, R., "J2EE-Based Architecture and Design — The Best Practices," http://www.nalandatech.com; accessed November 24, 2002, Nalanda Technologies.) The exhibit depicts the application client, applet, Web (Servlet/JSP), and EJB containers, built on the Java 2 Standard Edition and sharing platform services such as JDBC, JNDI, JMS, JTA, JavaMail, RMI-IIOP, Java IDL, XML, and connectors.

J2EE is based on the Java 2 platform, and uses many of its features, such as "Write Once, Run Anywhere" portability, JDBC for database access, RMI/IIOP technology for interaction with other distributed resources, and a security model. The J2EE model is platform independent, with support for many different platforms, including Windows, different versions of UNIX, and Linux, as well as different handheld operating systems. Several vendors support the J2EE platform, including Sun, IBM, BEA, and Oracle. In addition, there are a number of open source projects that have lent their support for various parts of the J2EE framework, including Jakarta (Apache) and JBoss.

MICROSOFT .NET

Microsoft .NET (see Exhibit 2) is a set of Microsoft software technologies for connecting information, systems, devices, and personnel. .NET is also built on small reusable components, and includes support for (1) different types of clients and devices that can communicate via XML, (2) XML Web services written by Microsoft and others that provide developers with ready-to-use application components, (3) server infrastructure that supports


Exhibit 2. The .NET Framework and Visual Studio .NET (Adapted from Microsoft .NET Framework Academic Resource Kit.) The exhibit depicts the supported languages (VB, C++, C#, JScript, J#) on top of the Common Language Specification, with ASP.NET Web services and Web Forms, Windows Forms, ADO.NET data and XML, and the .NET Framework base classes layered over the Common Language Runtime, all within Visual Studio .NET.

such applications, and (4) tools that allow developers to build such applications. At the server level, .NET is not platform independent, because only Windows servers have support for .NET at the moment. However, .NET has support for many different programming languages in which components can be built, such as C#, Visual Basic, C++, JScript, and J# (a derivation of Java).

A COMPARISON OF J2EE AND .NET

This section examines the differences between the two frameworks in terms of the following characteristics: programming languages, Web applications, Web services, backwards compatibility, support for mobility, and marketing ability.

Programming Languages (Advantage: Java)

Exhibit 1 and Exhibit 2 show the various languages supported by J2EE and Microsoft .NET. The diagrams show that .NET allows programmers the flexibility of using many different languages. However, only some of these languages are considered "first-class citizens," that is, are capable of taking full advantage of all the features of .NET. Programmers can mix and match such languages when creating their applications. The J2EE platform is based on a single language, Java. Both of these frameworks have adopted

the object-oriented paradigm. The primary languages in the .NET framework — VB.NET and C# — are both purely object oriented, and the same is, of course, true for Java.

One of the biggest advantages of Java is the learning process that went into the design of the language. Java was a product of significant research and development rather than a competitive struggle to the top of the market. Java started its life as early as 1995, and right from the beginning it was a language designed with the Internet in mind and with device-independent modular software development as the background. Java is a very clean language, with niceties such as automatic garbage collection and independence from the system-specific libraries that have hurt most other languages. The concept of a uniform "virtual machine" also implied that Java developers could develop on one platform and deploy the code on an entirely different platform that also supports the Java virtual machine (JVM).

Microsoft's advantage in this regard is the support for different languages in its .NET framework. The problem is that the primary language of choice for .NET is C#, which is a brand new and untested language. Because the Web development model is based on ASP — which in its earlier version was primarily built on top of Visual Basic — it is natural that most of the new applications developed in ASP.NET, or applications that are upgraded to ASP.NET, would continue to use Visual Basic (albeit the .NET version of this language). However, many organizations and developers do not perceive Visual Basic as a language for serious software development, and this can definitely work against Microsoft .NET.

Given the diversity of languages supported by .NET, at face value it might seem that the edge should go to Microsoft. However, an examination of what has happened at universities over the past five years indicates the difficulty that these languages are going to face in the future. In the late 1990s, universities were looking for a language that could be used to teach object-oriented principles. C++, despite its popularity in practice, was difficult to teach. When Java was introduced, universities had an alternative to C/C++ when it came to object-oriented education. As a result, Java has gained a lot of momentum at universities over the past five years. Consequently, a significant number of universities have, within the last couple of years, switched to teaching Java in their curriculum. The recency of the transition means that the universities will be reluctant to make another change to a new language like C# or VB.NET, unless there is a compelling reason. Given that both C# and VB.NET resemble Java considerably, it might be hard to find such a reason. Thus, it is likely that the future generation of programmers is going to be trained on Java.

Another issue that might work against the .NET framework, and VB.NET in particular, is the amount of retraining it is going to take to convert Visual Basic programmers into Visual Basic.NET programmers. This is especially true for those programmers who wish to take advantage of the object-oriented features in the language. This retraining might slow down the rate at which .NET gets adopted within companies.

Web Application Level (Advantage: Java)

The primary objective of both the J2EE and .NET frameworks is to provide support for developing Web-enabled applications. These applications range from customer-centric E-commerce applications to business-to-business (B2B) applications. Modern Web applications are developed using a multi-tier model, where client devices, presentation logic, application logic, data access logic, and data storage are separated from each other.

The biggest difference between the J2EE and .NET frameworks lies in the hardware and software choices available in the two frameworks. In essence, .NET is an integrated framework while J2EE is a framework that is integratable. Thus, using the J2EE framework allows organizations to, in theory, mix and match products from several vendors. For example, within this framework, one could use an Apache server with a BEA WebLogic application server that connects to an Oracle database server. At a later point, this same organization could replace the BEA WebLogic application server with a different product, such as IBM WebSphere, Oracle's Application Server, or the free JBoss server. All of these applications can be run on top of several operating system platforms, including UNIX, Linux, and Windows. With the .NET framework, an organization is essentially limited to systems software developed by Microsoft, including its IIS Web server.

The integrated nature of the .NET framework might be considered an advantage for small and medium-sized applications that typically do not have the scalability needs of large applications. This is not to say that .NET cannot be used for developing large applications. However, we believe that the largest adopters of .NET will be the current small and medium-sized application developers that use ASP. It should be noted, though, that even for these organizations, the shift to .NET will not be easy and will require significant retraining.

For large applications that have significant scalability and security requirements, the J2EE framework provides the flexibility needed to achieve the desired architectural and performance goals. However, Sun's recent slowness in keeping up with the standards and in coming up with development paradigms has put a dent in this progress. Also, the delay in

integrated support for JavaServer Pages and in support for the Web services standards has caused the development of many proprietary Java-based Web application extensions from companies such as Oracle, IBM, and BEA. This might affect the mix-and-match abilities that are an essential advantage of a framework.

The maturity of the J2EE framework means that creating large-scale applications has become more a science than an art. The J2EE application servers manage most of the complexity associated with scalability, allowing programmers to focus on application development issues. In this regard, the existence of J2EE design patterns allows organizations to adopt the best practices and learn from them.

As previously noted, two key issues that need to be addressed by all Web applications are scalability and security. Both frameworks rely heavily on server software — operating systems, Web servers, and database systems — and they seem to have an even share of success in this regard. The J2EE framework is operating system agnostic. The availability of cross-platform Web servers in Apache and database systems such as Oracle has certainly helped the cause of J2EE. In addition, the preferred choice for running J2EE has been UNIX-based platforms. The built-in reliability, scalability, and security of this platform has been one of the key reasons why the J2EE platform has been successful for developing large-scale applications. On the other hand, the fact that .NET runs only on Microsoft Windows and its associated Web server may be considered a disadvantage by many architects. Further, despite the advances, security holes are common in Windows. In addition, scaling a Windows-based system often means adding several additional machines, which in turn are more difficult to manage. However, an advantage that the .NET framework has is that the tight integration of the operating system with the Web and database servers in .NET can help the applications be more resource efficient and potentially hold the edge in performance, especially when a small number of servers are adequate to meet the business's needs (e.g., in the case of small and medium-sized applications).

In summary, for the various organizations that have made a significant investment in J2EE-based applications, we do not see any reason for them to shift to the .NET framework. In the short term, we envision that the existing base of ASP-based applications is where .NET will make significant inroads.

Web Services Level (Advantage: Microsoft)

Web services are the latest buzz to take the software industry by storm. However, very few people truly understand what Web services are really all about. Web services represent a paradigm where systems developed in different

platforms can interoperate with each other. The key to this interoperability is a series of standards (all based in one form or another on XML). Primary among these standards are SOAP (Simple Object Access Protocol), UDDI (Universal Description, Discovery, and Integration), and WSDL (Web Services Description Language).

Both J2EE and .NET have more or less equal support for these Web services standards. The difference lies in the fact that, in .NET, support for XML is an integral part of the framework whereas at this point, in J2EE, XML support has to be "bolted on" (see Exhibit 1 and Exhibit 2). For example, in .NET, data retrieved from a database using the DataSet object in ADO.NET is actually stored in XML format. The J2EE alliance has been somewhat sluggish in getting these standards to be integrated into the framework. This has resulted in various Java-based application development companies creating their own proprietary methods based on Java for Web services.

Further, Microsoft is one of the organizations playing a key role in the standards body developing these Web service standards. Their willingness and agility in incorporating these standards into .NET has given a temporary advantage to .NET in the Web services arena. It is, however, expected that eventually the standards will become an integral part of both frameworks. The ability for systems created using the two frameworks to interoperate will mean that companies may not need to switch frameworks in order for them to interoperate with systems internally or externally. This might work against .NET in the sense that organizations that have already committed to J2EE (due to its head start in the Web applications arena) will have even less of a compelling reason to switch to the .NET framework.
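To make the point about ADO.NET's built-in XML support concrete, the sketch below fills a DataSet and then reads both its contents and its structure back as XML. The query, connection string, and table names are hypothetical; the GetXml and GetXmlSchema calls are the standard DataSet methods of the framework.

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DataSetXmlSketch
{
    static void Main()
    {
        // Hypothetical query and connection string.
        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT PublisherId, Name FROM Publishers",
            "server=(local);database=Bookstore;Integrated Security=SSPI");

        DataSet publishers = new DataSet("PublisherList");
        adapter.Fill(publishers, "Publisher");

        // The DataSet exposes both its data and its structure as XML.
        Console.WriteLine(publishers.GetXml());        // data as an XML document
        Console.WriteLine(publishers.GetXmlSchema());  // structure as an XSD schema
    }
}
```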

Backwards Compatibility (Advantage: Java)

Because VB.NET is a new object-oriented rewrite of the original Visual Basic language, very little of the old VB code is upgradeable to VB.NET. Applications written in legacy languages need to be migrated to the new .NET platform in order to utilize the full compatibility with the .NET technology. Similarly, the Web-based scripting language ASP (Active Server Pages) is replaced by ASP.NET, which has a radically different look and feel. Although Microsoft allows coexistence of both ASP and ASP.NET code on the same server, the interaction between them is not easy. The Microsoft Visual Studio product includes a migration wizard for moving existing Visual Basic code into VB.NET, although such tools are not foolproof and can only change a subset of all the existing code types.

Sun Microsystems, on the other hand, has traditionally taken backwards compatibility quite seriously. Most newer versions of Java still include support for the older API, although such use is "deprecated" and not recommended. Given this, it is more likely that an organization that has applications written in Java is going to continue to use that language. However, organizations with a VB code base may not automatically move to VB.NET because of the significant amount of rewrite required. Indeed, this barrier might cause some organizations to reexamine their options and possibly cause some to switch to the J2EE environment.

Support for Mobility (Advantage: Even)

Microsoft has proposed a version of the .NET framework known as the .NET Compact Framework, which is a smaller version of the desktop/server .NET framework, and is designed to run on devices that support Microsoft's mobile device operating systems such as Windows CE or Pocket PC 2002. This compact framework comes with full support for XML and ADO.NET. In addition, Microsoft has a toolkit for developing applications for its Pocket PC environment that was available at the same time as Visual Studio.NET. Sun has a version of the Java platform known as J2ME (Java 2 Platform Micro Edition). J2ME has gained immense popularity on mobile devices in the past couple of years, with leading vendors such as Nokia, Motorola, and others supporting Java on their mobile devices. In addition, there is also a virtual machine available for the Palm OS. We believe that the decision to use J2ME or the .NET Compact Framework (or both) will be primarily based on the types of devices that an organization wants to support.

Marketing (Advantage: Microsoft)

Although Sun had a head start in the process of developing its framework, Microsoft is quickly catching up, thanks to its fierce and aggressive marketing practices. Microsoft has built a well-known advantage in the desktop operating system market, and is quickly closing the gap in the server-level database market as well, thanks to its marketing strategy. Given its history with successfully marketing other products, one would definitely have to give the marketing advantage to Microsoft for promoting .NET. The J2EE side of the equation is hampered in this regard because the products are marketed by several companies, many of which are competing for the same market share. This may be the biggest threat facing the J2EE framework moving forward.


SUMMARY

Both Sun's J2EE and Microsoft's .NET frameworks have advantages and disadvantages that cannot be ignored. At the end of the day, however, the choice should be based on the specific needs and characteristics of the organization making the decision. We do not see any clear advantage in switching to .NET if an organization is currently already committed to Java, or vice versa.

Editor's Note: See also Chapter 55, "At Your Service: .Net Redefines the Way Systems Interact."



Chapter 35

XML: Information Interchange John van den Hoven

Today’s rapidly changing, global business environment requires an enterprise to optimize its value chain to reduce costs, reduce working capital, and deliver more value to its customers. The result is an ever-increasing demand for the efficient interchange of information, in the form of documents and data, between systems within an enterprise, and between the enterprise’s systems and those of its customers, suppliers, and business partners. The wide range of technologies, applications, and information sources in use today present the modern enterprise with an immense challenge to manage and work with these different data formats and systems. This internal challenge is further magnified by the efforts required to work with the different data formats and systems of other enterprises. Increasingly, eXtensible Markup Language (XML) is viewed as a key enabling standard for exchanging documents and data because of its ease of implementation and operational flexibility. It is now one of the key technology standards in a modern information systems architecture that enables an enterprise to be more flexible, responsive, and connected. XML OVERVIEW Definition The XML specification is defined by the World Wide Web Consortium (W3C). The eXtensible Markup Language 1.0 W3C Recommendation defines XML as follows: “Extensible Markup Language, abbreviated XML, describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. XML is an application profile or restricted form of SGML, the Standard Generalized Markup Language.”1 Examining the component parts of the term “eXtensible Markup Language” can further enhance the definition of XML. A markup language is a system of symbols and rules to identify structures in a document. XML is 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


considered to be very extensible because the markup symbols are unlimited and self-defining, allowing the language to be tailored to a wide variety of data exchange needs. Thus, XML is a system of unlimited and self-defining symbols and rules, which is used to identify structures in a document.

Description

XML is becoming a universal format for documents and data. It provides a file format for representing data and a Document Type Definition (DTD) or an XML Schema for describing the structure of the data (i.e., the names of the elements and the attributes that can be used, and how they fit together). XML is "a set of rules (you may think of them as guidelines or conventions) for designing text formats that let you structure your data."2 "The XML syntax uses matching start and end tags to mark up information. A piece of information marked by the presence of tags is called an element; elements may be further enriched by attaching name-value pairs (for example, country = "US") called attributes. Its simple syntax is easy to process by machine, and has the attraction of remaining understandable to humans."3 More information on XML can be found in the XML FAQ (Frequently Asked Questions) at http://www.ucc.ie/xml/.

History and Context

Hypertext Markup Language (HTML) was created in 1990 and is now widely used on the World Wide Web as a fixed language that tells a browser how to display data. XML became available in 1996 and became a W3C standard in 1998 (revised in 2000) to solve HTML's shortcomings in handling very large documents. XML is more complex than HTML because it is a metalanguage used to create markup languages, while HTML is one of the languages that can be expressed using XML (XHTML is a "reformulation of HTML 4 in XML 1.0"). While HTML can be expressed using XML, XML itself is a streamlined, Web-enabled version of Standard Generalized Markup Language (SGML), which was developed in the early 1980s as the international standard metalanguage for markup. SGML became an International Organization for Standardization standard (ISO 8879) in 1986. It can be said that XML is an example of Pareto's principle at work, in that it provides 80 percent of the benefit of SGML with 20 percent of the effort.

VALUE AND USES OF XML

Value

XML can be a container for just about anything. It can be used to describe the contents of a very wide range of file types, including Web pages, business

documents, spreadsheets, database files, address books, and graphics, to a very detailed level. This allows technology vendors, business users, and enterprises to use XML for anything where interoperability, commonality, and broad accessibility are important. Many technical and business benefits result from this flexibility.

Technical Value. XML provides a common transport technology for moving documents and data around in a system-neutral format. Its key benefit is its ability to abstract data from specific technologies such as the processor that the data runs on, the database that manages the data, communication protocols that move the data, and the object models and programming languages that manipulate the data. By providing a common format for expressing data structure and content, XML enables applications and databases to interchange data without having to manage and interpret proprietary or incompatible data formats. This substantially reduces the need for custom programming, cumbersome data conversions, and having to deal with many of the technical details associated with the various technical infrastructures and applications in use today.

XML is also the foundation for many key emerging standards such as the Web services standards. Web services standards such as SOAP (Simple Object Access Protocol) for invoking Web services, WSDL (Web Services Description Language) for describing Web services, and UDDI (Universal Description, Discovery, and Integration) for registering and discovering Web services are key emerging technology standards that rely on XML as their foundation. With its capabilities for bridging different technologies, XML will disrupt many technology markets, including enterprise application integration, business-to-business integration, application servers, personal productivity software (such as word processing), messaging, publishing, content management, portals, and application development. The use of XML will enable several of these markets to converge. It will also enable proprietary data formats to be replaced, resulting in greater interoperability among the various applications and technologies. Business Value. The impact of XML can be compared to that of the Rosetta Stone. The Rosetta Stone was discovered in Egypt in the late 18th century, inscribed with ancient Egyptian hieroglyphics and a translation of them in Greek. The stone proved to be the key to understanding Egyptian writing. It represents the “translation” of “silent” symbols into a living language, which is necessary to make these symbols meaningful.

The interfaces used in today’s enterprises have become the modern form of hieroglyphics. XML promises to play a similar role to that of the Rosetta Stone by enabling a better understanding of these modern hiero427

and by making the content of the data in these interfaces understandable to many more systems. The business value of XML can be better understood by examining the major ways in which it is currently being used with documents and data in the enterprise.

Uses

There are two major classes of XML applications: documents and data. XML is becoming a universal format for both documents and data because of its capabilities for data exchange and information presentation. In terms of documents, XML is derived from SGML, which was originally designed for electronic publishing, and electronic publishing remains one of the main uses of XML today. In terms of data, XML is widely used as a data exchange mechanism. XML also enables more efficient and effective Web searching for both documents and data.

Electronic Publishing. Electronic publishing focuses on enabling the presentation of the content of documents in many different forms. XML is particularly well-suited for internationalized, media-independent, electronic publishing.

XML provides a standardized format that separates information content from presentation, allowing publishers of information to "write once and publish everywhere." XML defines the structure and content (independent of its final form), and then a stylesheet is applied to it to define its presentation in electronic or printed form. These stylesheets are defined using the eXtensible Stylesheet Language (XSL) associated with XML to format the content automatically for various users and devices. Different stylesheets, each conforming to the XSL standard, can be used to provide multiple views of the same XML data for different users. This enables a customized interface to be presented to each user based on their preferences. XML supports Unicode, which enables the display and exchange of content in most of the world's languages, supporting even greater customization and globalization. Through the use of XML and XSL, information can be displayed the way the information user wants it, making the content richer, easier to use, and more useful. XML will also be increasingly used to target this media-independent content to new devices, thus creating new delivery channels for information. Wireless Markup Language and VoiceXML are examples of XML-based languages that enable the delivery of information to a much wider range of devices. XML will be used in an ever-increasing range of devices, including Web browsers, set-top boxes for televisions, personal digital assistants such as the Palm™ and RIM Wireless Handheld™, iPAQ™ Pocket PC, digital

cell phones, and pagers. XML will do for data and documents what Java has done for programs — make the data both platform independent and vendor independent.

Data Exchange. Any individual, group of individuals, or enterprise that wants to share data in a consistent way can use XML. XML enables automated data exchange without requiring substantial custom programming. It is far more efficient than e-mail, fax, phone, and the customized interface methods that most enterprises are using today to work with their customers, suppliers, and business partners. XML will simplify data exchange within and between enterprises by eliminating these costly, cumbersome, and error-prone methods.
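As a minimal sketch of this kind of exchange — assuming a hypothetical purchase-order class rather than any particular industry vocabulary — the .NET XmlSerializer can turn a business object into an XML document that a partner system on any platform can read:

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical document type; real vocabularies (OAGIS, ebXML, and so on)
// define far richer structures, but the mechanics are the same.
public class PurchaseOrder
{
    public string OrderNumber;
    public DateTime OrderDate;
    public decimal Total;
}

class DataExchangeSketch
{
    static void Main()
    {
        PurchaseOrder po = new PurchaseOrder();
        po.OrderNumber = "PO-1001";
        po.OrderDate = DateTime.Today;
        po.Total = 249.50m;

        // Serialize the business object to XML so any partner system,
        // on any platform, can parse it with standard XML tools.
        XmlSerializer serializer = new XmlSerializer(typeof(PurchaseOrder));
        using (StreamWriter writer = new StreamWriter("po-1001.xml"))
        {
            serializer.Serialize(writer, po);
        }
    }
}
```

The same serializer can rebuild the object on the receiving side, which is what removes most of the custom interface programming described above.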

Within an enterprise, XML can be used to exchange data between individuals, departments, and the applications supporting these individuals and departments. It can also be used to derive greater value from legacy applications and data sources by making the data in these applications easier to access, share, and exchange. This is especially important to facilitate: data warehousing to allow access to the large volumes of legacy data in enterprises today; electronic commerce applications, which must work with existing applications and their data formats; and Web access to legacy data.

One of the greatest areas of potential benefit for XML is as a means for enterprises with different information systems to communicate with one another. XML can facilitate the exchange of data across enterprise boundaries to support business-to-business (B2B) communications. XML-based EDI (electronic data interchange) standards (XML is complementary to EDI because EDI data can travel inside XML) are extending the use of EDI to smaller enterprises because of XML's greater flexibility and ease of implementation. XML is transforming data exchange within industries and within supply chains through the definition of XML-based, platform-independent protocols for the exchange of data. XML is a flexible, low-cost, and common container for data being passed between systems, making it easier to transmit and share data across the Web. XML provides a richer, more flexible format than the more cumbersome and error-prone file formats currently in use, such as fixed-length messages, comma-delimited files, and other flat file formats. XML opens up enterprise data so that it can be more easily shared with customers, suppliers, and business partners, thereby enabling higher quality, more timely, and more efficient interactions.

Web Searching. The use of XML also greatly enhances the ability to search for information in both documents and data. XML does this because documents and data that include metadata (data about data) are more easily

searched — the metadata can be used to pinpoint the information required. For example, metadata can be used as keywords for improved searching over the full-text searches prevalent today. As a result, XML makes the retrieval of documents and data much faster and more accurate than it is now. The need for better searching capabilities is especially evident to those searching for information among the masses of documents and data inside the enterprise and the overwhelming volume of documents and data available on the Internet today. XML makes it easier to search and combine information from both documents and data, whether these originate from within the enterprise or from the World Wide Web. As a result, the enterprise's structured data from its databases can be brought together with its unstructured data in the form of documents, and be linked to external data and documents to yield new insights and efficiencies. This will become even more important as business users demand seamless access to all relevant information on topics of interest to them and as the volume of data and documents continues to grow at a rapid rate.

XML STANDARDS

The good thing about standards is that there are so many to choose from, and XML standards are no exception. XML standards include technology standards and business vocabulary standards.

Technology Standards

The XML 1.0 specification provides the technology foundation for many technology standards. XML and XHTML define structured documents, and DTDs and XML Schemas establish the rules governing those documents. The XML Schema provides greater capabilities than a DTD for defining the structure, content, and semantics of XML documents by adding data types to XML data fields. XML Schema will become an essential part of the way enterprises exchange data by enabling cross-enterprise XML document exchange and verification. Using an XML Schema will allow enterprises to verify the data by adding checks such as ensuring that XML files are not missing data, that the data is properly formatted, and that the data conforms to the expected values. Other technology standards are being developed and deployed to extend the value of XML. These include standards for (1) transformations — XSL Transformations (XSLT) for converting XML data from one XML structure to another or for converting XML to HTML; (2) stylesheets — eXtensible Stylesheet Language (XSL) is a pure formatting language that describes how a document should be displayed or printed; and (3) programming — Document Object Model (DOM) is a standard set of function calls for manipulating XML and HTML files from a programming language.
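A short sketch of the schema verification described above, using the validating reader shipped with the .NET Framework of this period, is shown below; the document and schema file names are hypothetical.

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

class SchemaValidationSketch
{
    static void Main()
    {
        // Hypothetical files; the schema encodes the checks described in the
        // text (required elements, data types, expected values).
        XmlTextReader raw = new XmlTextReader("purchase-order.xml");
        XmlValidatingReader reader = new XmlValidatingReader(raw);
        reader.ValidationType = ValidationType.Schema;
        reader.Schemas.Add(null, "purchase-order.xsd");
        reader.ValidationEventHandler += new ValidationEventHandler(OnValidationError);

        while (reader.Read())
        {
            // Reading the document end-to-end triggers schema validation.
        }
        reader.Close();
    }

    static void OnValidationError(object sender, ValidationEventArgs e)
    {
        Console.WriteLine("Validation problem: " + e.Message);
    }
}
```

Missing elements, malformed values, or out-of-range data are reported through the validation event handler rather than discovered later in application code.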

The XML "family of technologies" is continuously expanding.

Business Standards

The other area of XML standards is vocabulary standards. XML is a metalanguage (a language for describing other languages) used to define other domain- or industry-specific languages that describe data. "XML allows groups of people or organizations to create their own customized markup applications for exchanging information in their domain."4 These XML vocabulary standards are being created at a rapid pace as vendors and users try to establish standards for their own enterprises, for industries, or as general-purpose standards. XML allows industries, supply chains, and any other group that needs to work together to define protocols or vocabularies for the exchange of data. They do this by working together to create common DTDs or XML Schemas. These result in exchangeable business documents such as purchase orders, invoices, and advanced shipping notices, which taken together form a language of their own.

XML is undergoing rapid innovation and experiencing widespread adoption, resulting in many horizontal (across many industries) and vertical (within an industry) vocabularies, and the development of a common messaging infrastructure. A horizontal vocabulary minimizes the need to interact with multiple vocabularies that are each focused on specific industries or domains, and which cannot easily talk to each other. Common XML vocabularies for conducting business-to-business commerce include OAGIS, ebXML, and EDI/XML. The Open Application Group's Integration Specification (OAGIS) defines over 200 XML documents to provide a broad set of cross-industry business objects that are being used for application-to-application data exchange internally, and business-to-business data exchange externally. The Organization for the Advancement of Structured Information Standards (OASIS) and the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) have defined an E-business XML (ebXML) business content standard for exchanging business data across a wide range of industries. EDI has been used to exchange business information for the past couple of decades. The XML/EDI group (http://www.xmledigroup.org/) has created a standard that enables EDI documents to be the payload within an XML document.

There are also many vertical vocabularies for a wide range of industries (and in many cases several within an industry). The finance, technology, and healthcare industries are prominent areas where much XML vocabulary development is taking place. In finance, the Financial Products Markup Language (FpML) and Financial Information eXchange Markup Language

(FIXML) are being established as standards for exchanging contracts and information about transactions. In technology, the RosettaNet™ project is creating a standardized vocabulary and an industry framework that defines how XML documents and data are assembled and exchanged. In healthcare, the Health Level 7 (HL7®) Committee is creating a standard document format, based on XML, for exchanging patient information between healthcare organizations such as hospitals, labs, and healthcare practitioners. These are but a few of the hundreds of vertical standards being created across a wide range of industries. More details on these various industry vocabularies can be found at xml.org (http://www.xml.org).

In addition to the ebXML horizontal vocabulary for business content, ebXML also defines a common messaging infrastructure. The ebXML Messaging Services Specification is used by many horizontal (e.g., OAGIS) and vertical (e.g., RosettaNet) business vocabulary standards. The ebXML messaging infrastructure includes a message transport layer (for moving XML data between trading partners), a registry/repository (which contains business process sequences for message exchanges, common data objects, trading partner agreements, and company profiles), and security (for authenticating the other parties). More information on ebXML can be found at www.ebxml.org.

CONCLUSION

XML is one of the key technology standards in a modern information systems architecture for information interchange. The flexibility of XML enables it to be a container for a very wide range of content, resulting in many business and technical benefits because of its simplicity, interoperability, commonality, and broad accessibility. XML enables different enterprises using different applications, technologies, and terminologies to share and exchange data and documents.

XML has been widely applied to electronic publishing, data exchange, and Web searching. In electronic publishing, XML facilitates the customization of data and documents for individual needs, broadens the range of information presentation and distribution options, and enables the global distribution of information. In data exchange, XML is emerging as a key integrating mechanism within the enterprise and between the enterprise and its customers, suppliers, and business partners. In Web searching, XML greatly enhances the ability to search for information in documents and data, and to combine the results.

As Nelson Jackson has said, "I do not believe you can do today's job with yesterday's methods and be in business tomorrow." Yesterday's methods for handling business documents and data are no longer meeting today's business needs, let alone positioning the enterprise to meet the challenges

XML: Information Interchange of tomorrow. XML has emerged as the new method for handling business documents and data in a way that increases the reach, range, depth, and speed of information interchange within and between enterprises. References 1. Extensible Markup Language (XML) 1.0 (Second Edition) W3C Recommendation 6 October 2000, Page 4. http://www.w3.org. 2. XML in 10 points is an essay by Bert Bos that covers the basics of XML, Page 1. http://www.w3.org. 3. Extensible Markup Language (XML) Activity Statement, Page 2. http://www.w3.org. 4. The XML FAQ, Page 9. http://www.ucc.ie.



Chapter 36

Software Agent Orientation: A New Paradigm
Roberto Vinaja
Sumit Sircar

Software agent technology, which started as a new development in the field of artificial intelligence, has evolved to become a versatile technology used in numerous areas. The goal of this chapter is to review the general characteristics of software agents and describe some of the most important applications of this technology. One of the earliest and most accepted definitions is the one by Wooldridge1 in the February 1996 issue of The Knowledge Engineering Review. An agent is “an autonomous, self-contained, reactive, proactive computer system, typically with central locus of control that is able to communicate with other agents via some Agent Communication Language.” However, this is just one among literally dozens of definitions. There is no common definition of an agent, even after almost a decade of continuous developments in this area. In fact, there might never be a common definition because the real power of agents does not reside in any specific application. The real power resides in the fact that agents are not specific applications but a paradigm for software development. The idea that started as a breakthrough development in artificial intelligence has become a new paradigm being applied in a wide range of systems and applications. There is an urgent need for a standard set of agent protocols to facilitate interaction of several agents across platforms. The “discovery” of the structured programming concept revolutionized the development of applications. More recently, the object-oriented paradigm provided a higher abstraction level and a radically different approach to application development. Concepts such as encapsulation and polymorphism transformed the conceptualization and development of systems. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


The agent-orientation approach is perhaps the next paradigm. Although many of the concepts and principles associated with agents (such as mobility, intelligence, and purposeful planning) are not new, the overall agent approach provides a totally new perspective. According to Diep and Massotte of the Ecole des Mines d’Ales, France, agents are the next step in the evolution of programming languages and an innovative paradigm.2

In his book Future Edge, Joel Barker says that every paradigm will eventually uncover problems it cannot solve, and that these unsolvable problems trigger a paradigm shift.3 How do we identify a paradigm shift? According to Barker, a shift occurs when a number of problems are identified that cannot be solved using the current paradigm. Object orientation has been very successful in solving a wide variety of problems. However, there are some development problems for which the object-oriented approach has not been sufficiently flexible, and this is the reason software engineers and developers have turned to a new approach. Agents have properties, such as intelligence and autonomy, which objects do not have. Agents can negotiate and can be mobile, yet they still possess many of the characteristics of objects. The agent paradigm combines concepts from artificial intelligence, expert systems, and object orientation. Knowledge-based systems, like expert systems, have been applied in many different areas. Similarly, the agent paradigm is becoming pervasive.

Agents have several similarities with objects and go one step further. Like objects, agents are able to communicate via messages. They have data and methods that act on that data, but also have beliefs, commitments, and goals.4 Agents can encapsulate rules and planning. They are adaptive and, unlike objects, they have a certain level of autonomy.

According to Wagner, agent orientation is a powerful new paradigm in computing.5 He states that agent-oriented concepts and techniques could well be the foundations for the next generation of mainstream information systems. Wagner also affirms that the agent concept might become a new fundamental concept for dealing with complex artificial phenomena, in the same manner as the concept of the entity in data modeling and the object in object orientation. Other authors such as O’Malley and DeLoach6 have described the advantages of the agent-oriented paradigm in some detail; Wooldridge7 has proposed an agent-based methodology for analysis and design; and Debenham and Henderson-Sellers8 with the University of Technology, Sydney, Australia, have proposed a full life-cycle methodology for agent-oriented systems.

BACKGROUND

In the past few years, there has been a revolution in the tools for organizational decision making. The explosive growth of the Internet and the World

Software Agent Orientation: A New Paradigm Wide Web has encouraged the development and spread of technologies based on database models and artificial intelligence. Organizational information structures have been dramatically reshaped by these new technologies. The old picture of the organizational decision-making environment with tools such as relational databases, querying tools, e-mail, decision support systems, and expert systems must be reshaped to incorporate these new technologies. Some examples follow: • The browser has become the universal interface for accessing all kinds of information resources. Users can access not just Web pages, but also multimedia files, local files, and streaming audio and video using a program such as Internet Explorer or Netscape Navigator. Microsoft has tried to integrate Internet Explorer and the Windows operating system. Netscape has tried to embed operating system capabilities to its Navigator program. The ubiquitous use of the browser as the common interface for multiple resources has changed the way users access systems. • The proliferation of data warehouses. Organizations are consolidating information stored in multiple databases in multidimensional repositories called data warehouses. Firms are strategically using data mining tools to enable better decision making. However, some data warehouses are so huge that it would be almost impossible for a user to obtain the relevant information without the aid of some “intelligent” technology. • The development and growth of the Internet and the World Wide Web. A decade ago, the Internet was the realm of academics and scientists;9 now it is the public network of networks. The volume of information is growing at an exponential rate. Users need navigation and information retrieval tools to be able to locate relevant information and avoid information overload.10 • The establishment of Internet-based interorganizational systems. Many organizations are capitalizing on the use of the Internet as a backbone for the creation of extranets. Groupware-based tools such as Lotus Notes facilitate sharing data and workflow. Considering these dramatic changes in the information systems landscape, the need for a new paradigm for development should be apparent. Agent orientation is perhaps the most viable candidate for this urgent need. ATTRIBUTES OF AGENTS It is quite common in artificial intelligence (AI) to characterize an agent using human attributes, such as knowledge, belief, intention, and obligation. Some AI researchers have gone further and considered emotional agents.11 Another way of giving agents human-like attributes is to represent 437

PROVIDING APPLICATION SOLUTIONS them visually using techniques such as a cartoon-like graphical icon or an animated face.12 Examples of agents with a graphical interface are the Microsoft help agents. Research into this matter has shown that although agents are pieces of software code, people like to deal with them as if they were dealing with other people. Agents have some special properties but miscommunication has distorted and exaggerated them, causing unrealistic expectations. Intelligence What exactly makes an agent “intelligent” is something that is difficult to define. It has been the subject of much discussion in the AI field, and a clear answer has not yet been found. Allen Newell13 defines intelligence as “the degree to which a system approximates a knowledge-level system.” Intelligence is defined as the ability to bring all the knowledge a system has at its disposal to bear in the solution of a problem (which is synonymous with goal achievement). A practical definition that has been used for artificial intelligence is “attempting to build artificial systems that will perform better on tasks that humans currently do better.” Thus, tasks such as number addition are not artificial intelligence because computers easily do this task better than humans do. However, voice recognition is artificial intelligence because it has been very difficult to get computers to perform even the most basic tasks. Obviously, these definitions are not the only ones acceptable but they do capture the nature of AI. Autonomy Autonomy refers to the principle that agents can operate on their own without the need for human guidance.7 Self-regulated agents are goal-governed agents that, given a certain goal, are able to achieve it by themselves.14 Cooperation To cooperate, agents need to possess a social ability, that is, the ability to interact with other agents and possibly humans via some communication language.7,15 Agents may be complex or simple, and either work alone or in harmony, creating a multi-agent environment. Each agent or group of agents has knowledge about itself and about other agents.16 The agent should have a “language” or some way to communicate with the human user and with other agents as well.17 The nature of agent-to-agent communication is certainly different from the nature of human-to-agent communication, and this difference calls for different approaches.18 438
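To make properties such as autonomy, goals, and social ability concrete, the following sketch shows one way a pair of cooperating agents might be expressed in Python. It is only an illustration of the ideas in this section, not a reference to any particular agent framework or communication-language standard; the Message format, the class names, and the simple request/inform protocol are all assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class Message:
        # A tiny stand-in for an agent communication language:
        # a performative ("inform" or "request") plus sender, receiver, and content.
        performative: str
        sender: str
        receiver: str
        content: object

    class Agent:
        """Minimal agent: private beliefs, explicit goals, and message-based cooperation."""

        def __init__(self, name):
            self.name = name
            self.beliefs = {}   # what the agent currently holds to be true
            self.goals = []     # facts the agent wants to establish
            self.inbox = []     # messages received from other agents

        def tell(self, msg):
            self.inbox.append(msg)

        def step(self, directory):
            # One autonomous cycle: process messages, then pursue outstanding goals.
            while self.inbox:
                msg = self.inbox.pop(0)
                if msg.performative == "inform":
                    key, value = msg.content
                    self.beliefs[key] = value
                elif msg.performative == "request" and msg.content in self.beliefs:
                    # Social ability: answer another agent's request from its own beliefs.
                    reply = Message("inform", self.name, msg.sender,
                                    (msg.content, self.beliefs[msg.content]))
                    directory[msg.sender].tell(reply)
            for goal in list(self.goals):
                if goal in self.beliefs:
                    self.goals.remove(goal)           # goal achieved
                else:
                    for other in directory.values():  # ask every other known agent
                        if other is not self:
                            other.tell(Message("request", self.name, other.name, goal))

    # Usage: agent "a" wants a fact that only agent "b" knows.
    a, b = Agent("a"), Agent("b")
    b.beliefs["price"] = 42
    a.goals.append("price")
    directory = {"a": a, "b": b}
    for _ in range(3):
        for agent in directory.values():
            agent.step(directory)
    print(a.beliefs)   # {'price': 42}

Each call to step() is one autonomous cycle: the agent updates its beliefs from incoming messages and then works toward its goals, asking other agents for help when its own beliefs are not enough.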

Software Agent Orientation: A New Paradigm Openness An open system is one that relates, interacts, and communicates with other systems.19 Software agents as a special class of open systems have unique properties of their own but they share other properties in common with all open systems. These include the exchange of information with the environment and feedback.20 An agent should have the means to deal with its software environment and interact with the world (especially with other agents). Bounded Rationality Herbert Simon21 proposed the notion of bounded rationality to refer to the limitations in the individual’s inherent capabilities of comprehending and comparing more than a few alternatives at a time. Humans are not optimal and only in some cases locally optimal. Just like humans have limitations, agents also have limitations. The bounded rationality concept can also be applied to describe the behavior of an agent that is nearly optimal with respect to its goals, as its resources will allow. Because of limited resources, full rationality may not always be possible even when an agent has the general capability to act. This is known as bounded rationality. For example, an agent for price comparison might not be able to search for every single online store on the World Wide Web and locate the true lowest price for a product. Even if the agent had the capability to do so, there is also a time limitation; users might not want to wait for an unusual amount of time. Nevertheless, users can be satisfied with a solution, which is nearly optimal and sufficient instead of the optimal solution. Most agents that use heuristic techniques for problem solving can reach acceptable (nonoptimal) solutions in a reasonable amount of time. Purposiveness The most distinctive characteristic of the behavior of higher organisms is their goal-directness, that is, their apparent purposiveness.22 Purposeful behavior is that which is directed toward the attainment of a goal or final state.23 An agent is something that satisfies a goal or set of goals.24 That is, a degree of reasoning is applied to the data that is available to guide the gathering of extra information to achieve the goals that have been set. Purposeful behavior pertains to systems that can decide how they are going to behave. Intentional attitudes, usually attributed to humans, are also a characteristic of agents.17 Human Interaction and Anthropomorphism An agent is a program that interacts with and assists an end user. There are many viewpoints concerning the form of interaction between an agent and 439

PROVIDING APPLICATION SOLUTIONS a human. Rapid development of networks and information processing by computer now makes it possible for large quantities of personal information to be acquired, exchanged, stored, and matched very quickly. More and more activities are becoming computer based and, as computers spread, new users are beginning to take advantage of the computer. The volume of information now available on the “information superhighway” is overwhelming, and these users need help to handle this information overload. In the past, the computer user stereotype was a sophisticated, professional, technically oriented person. Nowadays, however, the typical user is less sophisticated and user characteristics encompass different educational levels, different ages, and diverse cultural backgrounds. This requires a new paradigm of human–computer interaction that can handle the inherent complexity of user–computer interaction. The present paradigm is one in which the user directly manipulates the interface. In the past, the user used to request an action by issuing a command; an important improvement has been achieved with the implementation of graphical user interfaces (GUIs) such as in Microsoft Windows. GUIs are more intuitive and user-friendly; nevertheless, the user still has to initiate manipulation by clicking on a graphic or a hyperlink. Another important notion is the fact that the agent should have external indicators of its internal state. The user needs to visualize the response of the agent based on its external features. Should agents use facial expressions and other means of personification? The use of facial expressions or gestures can indicate the state of the agent. This is called anthropomorphism. There is much debate over anthropomorphism in agent user interface design (utilizing a human-like character interface). Some designers think that providing an interface that gives the computer a more human appearance can ensure that a computer novice feels comfortable with the computer. On the other hand, some critics say that an anthropomorphic interface may be deceptive and misleading. The use of human-like agents for an interface was pioneered by Apple Computer. The promotional video “The Knowledge Navigator,” produced by former chairman John Sculley, features an agent called Phil. Phil plays the roles of resource manager, tutor, and knowledge retrieval agent. However, the future vision depicted in this video is still utopian and impossible to achieve with existing technology. It is quite common in AI to characterize an agent using human attributes, such as knowledge, beliefs, intention, and obligation. Another way of giving agents human-like attributes is to represent them visually by using techniques such as a cartoon-like graphical icon or an animated face. 12 Research into this matter has shown that although agents are pieces of software code, people like to deal with them as if they were dealing with other people.25 440

Software Agent Orientation: A New Paradigm Whenever one interacts with some other entity, whether that entity is human or electronic, the interaction goes better if one’s expectations match reality. Therefore, when a problem needs to be solved, the user may not trust the agent enough to delegate important tasks. If the agent interface is sloppy, the user may perceive the agent as incapable of performing at a satisfactory level, and may be reluctant to delegate a task. Going to the other extreme is also dangerous. The optimum balance should be achieved between the level of autonomy and the degree of user control. By the very nature of delegation, assuming perfect performance, especially in a changing world where goals may be constantly changing, is likely to lead to disappointment. Users’ expectations are very important in making the agent useful. The problem arises from the fact that people tend to assign human attributes to a system personified as a human character. Contributing to the same problem is the tendency of researchers and marketers to advertise their products as human characters just for the sake of sales. Adaptation Adaptation is defined as the ability to react to the environment in a way that is favorable, in some way, to the continued operation of the system.26 Autonomous agents are software components with some ability to understand their environment and react to it without detailed instructions. The agent must have some mechanism to perceive signals from its environment. The environment is constantly changing and modified by the user interaction. The agent should be able to adapt its behavior and continue toward the desired goal. It should be capable of constantly improving skills, adapting to changes in the world, and learning new information. Furthermore, the agent should be able to adapt to unexpected situations in the environment, and be able to recover and perform an “adequate” response.27 Learning Interface agents are software programs that assist a user to perform certain specific tasks. These agents can learn by interacting with the user or with other agents. The agent should be able to learn from its experience. Because people do not all do the same tasks, and even those who share the same task do it in different ways, an agent must be trained in the task and how to do it.28 Ideally, the structure of the agent should incorporate certain components of learning and memory. This is related to heuristics and cybernetic behavior. Jon Cunnyngham29 has developed a hierarchy for understanding intelligence, the learning hierarchy, where at each step something is added to the learning mechanisms already at hand. The hierarchy has four levels of learning: 441

1. Learning by discovery
2. Learning by seeing samples
3. Learning by being told
4. Learning by being programmed
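One learning source that is easy to make concrete is the user feedback discussed in the next paragraphs: the agent keeps a numeric weight for each candidate behavior and nudges that weight whenever the user rates a suggestion. The sketch below is purely illustrative; the multiplicative update rule, the action names, and the rating scale are assumptions for the example rather than details of any system cited in this chapter.

    class FeedbackLearner:
        """Toy interface agent that learns which suggestions a user likes."""

        def __init__(self, actions, learning_rate=0.2):
            # Start indifferent: every candidate action gets the same weight.
            self.weights = {a: 1.0 for a in actions}
            self.lr = learning_rate

        def suggest(self):
            # Propose the currently highest-weighted action.
            return max(self.weights, key=self.weights.get)

        def feedback(self, action, rating):
            # rating in [-1.0, +1.0]; positive ratings reinforce, negative ones suppress.
            self.weights[action] *= (1.0 + self.lr * rating)

    agent = FeedbackLearner(["file_in_folder", "forward_to_assistant", "delete"])
    agent.feedback("forward_to_assistant", +1.0)   # user liked this suggestion
    agent.feedback("delete", -1.0)                 # user rejected this one
    print(agent.suggest())                         # 'forward_to_assistant'

Positive ratings gradually raise the weight of behaviors the user likes, so the agent's suggestions drift toward the user's observed preferences.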

Patti Maes12 has addressed the problem of agent training based on the learning approach of a real human assistant. In addition, she has proposed a learning model highly compatible with Cunnyngham’s hierarchy. When a personal assistant is hired, he is not familiar with the habits and preferences of his employer, and he cannot help much. As the assistant learns from observation and practices repetitively, he becomes knowledgeable about the procedures and methods used in the office. The assistant can learn in several ways: by observation, by receiving instructions from the boss, by learning from more experienced assistants, and also by trial and error. As the assistant gains skills, the boss can delegate more and more tasks to him. The agent has four learning sources: imitation, feedback, examples, and agent interaction. 1. Observing and imitating the user. The agent can learn by observing a repetitive behavior of the user over long periods of time. The agent monitors the user activity and detects any recurrent pattern and incorporates this action as a rule in the knowledge base. 2. Direct and indirect user feedback. The user rates the agent’s behavior or the agent’s suggestions. The agent then modifies the weights assigned to different behaviors and corrects its performance in the next attempt. The Web agent Firefly will choose certain Web sites that may be of interest based on one’s personal preferences. The agent asks to rate each one of the suggestions, and these ratings serve as an explicit feedback signal that modifies the internal weights of the agent. 3. Receiving explicit instructions from the user. The user can train the agent by giving it hypothetical examples of events and situations, and telling the agent what to do in those cases. The interface agent records the actions, tracks relationships among objects, and changes its example base to incorporate the example that it is shown. Letizia,30 a Web browser-based agent, collects information about the user’s behavior and tries to anticipate additional sites that might be of interest. 4. Advice from other agents. According to Maes, if an agent does not itself know what action is appropriate in a certain situation, it can present the situation to other agents and ask “what action they recommend for that situation.” For example, if one person in the organization is an expert in the use of a particular piece of software, then 442

Software Agent Orientation: A New Paradigm other users can instruct their agents to accept advice about that software from the agent of that expert user. Mobility The mobile agent concept encompasses three areas: artificial intelligence, networking, and operating systems.31 Mobile agents are somewhat more efficient models32 and consume fewer network resources than traditional code because the agent moves the computation to the data, rather than the data to the computation. Java applets can be accessed from any terminal with Internet access. To execute an applet, only a Java-enabled browser is required, such as the already widely used browser programs. Some examples of mobile agents are D’Agents,33 designed at Dartmouth, and IBM’s Aglets. Applications of Agents Agent technology has tremendous potential. Most agents are still in the prototype stage although there are a growing number of commercial applications. Most agent implementations are part of another application rather than stand-alone agents. This section focuses on how agents impact the other technologies, namely, how they support other applications. E-Mail Several agent applications have been developed for e-mail filtering and routing. An agent can help managers classify incoming mail based on the user’s specifications. For example, he could specify that all incoming mail with the word “Confirmation” be stored in a folder with lower priority. Also, the agent can learn that a user assigns a higher priority to mail personally addressed than mail received from a subscription list. After the user specifies a set of rules, the agent can use those rules to forward, send, or file the mail. It is true that e-mail programs already use filtering rules for handling and sorting incoming mail. However, an agent can provide additional support. An artificially intelligent e-mail agent, for example, might know that all requests for information are handled by an assistant, and that a message containing the words “request information” is asking for a certain information envelope. As a result, the agent will deduce that it should forward a copy of the message to the assistant. An electronic mail agent developed by Patti Maes12 is an excellent example of a stationary, limited scope agent, operating only on the user’s workstation and only upon the incoming mail queue for that single user. This agent can continuously watch a person’s actions and automate any regular patterns it detects. An e-mail agent could learn by observation that the user always forwards a copy of a message containing the words “request 443

PROVIDING APPLICATION SOLUTIONS for information” to an assistant, and might then offer to do so automatically. Data Warehousing Data warehouses have made available increasing volumes of data (some data warehouses reach the terabyte size) that need to be handled in an intuitive and innovative way. While transaction-oriented databases capture information about the daily operations of an organization, the data warehouse is a time-independent, relevant snapshot of the data. Although several tools such as online analytic processing (OLAP) help managers to analyze the information, there is so much data that, in fact, the availability of such an amount of information actually reduces, instead of enhances, their decision-making capabilities. Recently, agents have become a critical component of many OLAP and relational online analytic processing (ROLAP) products. Software agents can be used to search for changes in the data and identify patterns, all of which can be brought to the attention of the executive. Users can perform ad hoc queries and generate multiple views of the data. Internet-Based Applications As the World Wide Web grows in scale and complexity, it will be increasingly difficult for end users to track information relevant to their interests. The number of Internet users is growing exponentially. In the early years of the Internet, most of its users were researchers. Presently, most new users are computer novices. These new users are only partially familiar with the possibilities and techniques of the Internet. Another important trend is that more and more companies, governments, and nonprofit organizations are offering services and information on the Internet. However, several factors have hindered the use of Internet for organizational decision making, including: • The information on the Internet is located on many servers all over the world, and it is offered in different formats. • The variety of services provided in the marketspace is constantly growing. • The reliability of Web servers is unpredictable. The service speed of a Web server depends on the number of requests or the nature of the request. • Information is highly volatile, Web pages are dynamic and constantly change. Information that is accessible one day may move or vanish the next day. These factors make it difficult for a single person to collect, filter, and integrate information for decision making. Furthermore, traditional informa444

Software Agent Orientation: A New Paradigm tion systems lack the ability to address this challenge. Internet-based agents can help in this regard because they are excellent tools for information retrieval. Information Retrieval Search engines feature indices that are automatically compiled by computer programs, such as robots and spiders,34 which go out over the Internet to discover and collect Internet resources. However, search engines might not be optimal in every single case. The exponential growth in the number of Web pages is impacting the performance of search engines based on indexes and subject hierarchies. Some other problems derived from search engines’ blind indexing are inefficiency and the inability to use natural languages. A superior solution is the combination of search engines and information retrieval agents. Agents may interact with other agents when conducting a search. This will increase query performance and increase precision and recall. A user agent can perform a search on behalf of the user and operate continuously. This will save valuable time for the user and increase the efficient use of computer resources. Examples of search agents include: • The Internet Softbot, developed at the University of Washington under the direction of Oren Etzioni,35 is one of the first agents allowing adaptation to changes in the environment. The Softbot is based on the following main objectives: — Goal oriented: the user specifies what to find and the agent decides on how and when to find it. — Charitable: the Softbot tries to understand the request as a hint. — Balanced: the Softbot considers the trade-off between searching on its own, or getting more specific information from the user. — Integrated: this program serves as a common interface to most Internet services. • MetaCrawler36 is a software robot, also developed at the University of Washington, that aggregates Web search services for users. MetaCrawler presents users with a single unified interface. Users enter queries, and MetaCrawler forwards those queries in parallel to the search services. MetaCrawler then collates the results and ranks them into a single list, returning a consolidated report to the user that integrates information from multiple search services. Electronic Commerce Agents are a strategic tool for electronic commerce because of their negotiation and mobility characteristics. For example, Intershop Research is a company that has developed agents customized for electronic commerce transactions.37 Agents are sent to E-marketplaces on behalf of buyers and 445

PROVIDING APPLICATION SOLUTIONS sellers. These agents have a certain level of autonomous decision-making and mobility capabilities. They can proactively monitor trading opportunities, search for trading partners and products, and make trading decisions to fulfill users’ objectives and preferences based on the users’ trading rules and constraints. Mobile agents can move to the E-marketplace through the Internet and can be initiated from different computer platforms and mobile devices such as mobile phones and PDAs. The agents can interact with a number of other participants in an E-marketplace and visit other E-marketplaces, if required. Agents can be used to provide customer service and product information in online markets. Well-known companies such as Procter&Gamble and Coca-Cola have implemented software agents for customer service at their sites. The agent attempts to answer customer questions by identifying a keyword in the sentence and matching the keyword against a database of possible answers. If the agent is not successful in identifying a keyword, it will ask the user to restate the question. If the second attempt fails, it will provide the customer with a list of frequently asked questions (FAQs). There are also agent applications for business-to-business electronic commerce transactions. They provide more sophisticated negotiation protocols, can manage bidding systems, and handle RFQs (Request for Quotations) or RFPs (Request for Proposals). For example, SmartProcurement (developed by the National Institute for Standards and Technology and Enterprise Integration Technologies) uses agents to facilitate procurement.38 The system is based on CommerceNet technology. It includes a series of databases with supplier and transaction information. A purchasing agent, representing the buyer organization, can post RFQs to the database. Agents can also represent authorized suppliers. Supplier agents can access the RFQ database, review the details, and decide whether or not to post a bid. The purchasing agent reviews the proposals and selects one proposal among the many alternatives. The supplier agent is then notified of the award. Agents have potential applications for online auctions. Researchers at MIT have developed several prototype systems for online auctions. Kasbah is an online system for consumer-to-consumer electronic commerce based on a multi-agent platform. Users can create an agent, provide the agent with general preferences, and dispatch the agent to negotiate in an electronic marketplace. Têtê-à-têtê, also developed at MIT,12 is a negotiation system for business-to-consumer electronic commerce transactions. Merchants and customers can negotiate across multiple terms, including price, warranties, delivery times, service contracts, and other merchant value-added services. The Electric Power Research Institute has also 446

Software Agent Orientation: A New Paradigm developed agent-based E-commerce applications for the electric power industry.39 Intranet Applications An Intranet is a private internal network based on Internet protocols. Many organizations are implementing intranets for enhanced intraorganizational communications. Some of the major benefits of an Intranet are enhanced collaboration and improved information dissemination. Employees are empowered because they have access to the relevant information for decision making. Lotus Notes and other intranet software facilitate information sharing and group work. Beyond the convenient distribution of basic internal documents to employees, an intranet can improve communication and coordination among employees. Internal documents can be distributed to employees and groupware software can facilitate group work and collaboration. Professional services and meetings can be scheduled through engagement management software and calendars on an intranet, thus providing input from all parties and communicating current status to the individuals involved. Large amounts of a company’s information are made available to executives of that organization. As a result, executives must be able to find relevant information on an intranet. It is in this area that intelligent support for information search comes to play an important role. For example, the applications of agent technology in an intranet environment might include the automated negotiation and scheduling of meetings based on personal schedules and available resources. Monitoring The routine of checking the same thing over and over again is tedious and time consuming. However, by employing agents to do this task, automated surveillance ensures that each potential situation is checked any time the data changes, freeing decision makers to analyze and act on the information. This kind of agent is essentially a small software program written to perform background tasks. It typically monitors networks or databases and other information sources and flags data it has been instructed to find. In the past, information systems delivered limited information about critical issues, such as competitors. However, the expansion of the Internet and the Web facilitates the ability to deliver business intelligence to the executive. The use of monitoring agents can help companies gather information about the industry environment competitors. The collected information can be used for strategic planning purposes and for providing business intelligence. 447
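In code, a background monitoring agent of this kind can be little more than a loop that polls an information source and flags anything that matches the conditions it was given. The sketch below is a generic illustration rather than a description of any particular product; the polling interval, the condition format, the market_feed data source, and the notify callback are all assumptions made for the example.

    import time

    class MonitoringAgent:
        """Background agent: polls a data source and flags items it was told to watch for."""

        def __init__(self, source, conditions, notify, interval_seconds=300):
            self.source = source            # callable returning the current records
            self.conditions = conditions    # list of (name, predicate) pairs to check
            self.notify = notify            # callback for flagged items (e-mail, pager, etc.)
            self.interval = interval_seconds
            self.seen = set()               # avoid re-flagging the same item

        def check_once(self):
            for record in self.source():
                for name, predicate in self.conditions:
                    key = (name, repr(record))
                    if predicate(record) and key not in self.seen:
                        self.seen.add(key)
                        self.notify(name, record)

        def run(self, cycles=None):
            # Runs indefinitely by default; 'cycles' makes the loop testable.
            n = 0
            while cycles is None or n < cycles:
                self.check_once()
                time.sleep(self.interval)
                n += 1

    # Usage: flag competitor price cuts in a hypothetical market-data feed.
    def market_feed():
        return [{"competitor": "Acme", "price_change": -0.12},
                {"competitor": "Beta", "price_change": +0.03}]

    agent = MonitoringAgent(
        source=market_feed,
        conditions=[("price cut > 10%", lambda r: r["price_change"] <= -0.10)],
        notify=lambda name, rec: print(f"ALERT [{name}]: {rec}"),
        interval_seconds=1,
    )
    agent.run(cycles=1)   # prints one alert, for Acme

The same skeleton applies whether the source is a database of competitor prices, a news feed, or a network management log; only the source function and the conditions change.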

PROVIDING APPLICATION SOLUTIONS Push Technology and Agents Push technology delivers a constant stream of information to the user’s desktop without having him search for it. Push providers use this technology to send information for viewing by their customers. A related function for agents can be filtering information before it hits the desktop, so users receive only the data they need or get warnings of exceptional data. Users may design their own agents and let them search the Web for specific information. The user can leave the agent working overnight and the next morning finds the results. Another function can be to check a Web site for structure and layout. Every time that the site changes or it is updated, the agent will “push” a notice to the user. The combination of push technology and intelligent agents can be used for news monitoring. One of the most useful applications of software agents is helping the user to select articles from a constant stream of news. There are several applications that can generate an electronic newspaper customized to the user’s own personal interests and preferences. Companies can gain “business intelligence” by monitoring the industry and its competitors. Information about the industry environment can be used for strategy development and long-term planning. Financial Applications A financial analyst sitting at a terminal connected to the global information superhighway is faced with a staggering amount of information available to make a decision about investments. The online information includes company information available for thousands of stocks. Added to this are online financial services and current data about companies’ long-term projects. The analyst faces an important dilemma: how to find the information that is relevant to his problem and how to use that information to solve it. The information available through the network is overwhelming, and the ability to appropriately access, scan, process, modify, and use the relevant data is crucial. A distributed agent framework called Retsina that has been used in financial portfolio management is described by Katia Sycara of Carnegie Mellon University. The overall portfolio-management task has several component tasks: eliciting (or learning) user profile information, collecting information on the user’s initial portfolio position, and suggesting and monitoring a reallocation to meet the user’s current profile and investment goals. Each task is supported by an agent. The portfolio manager agent is an interface agent that interacts graphically and textually with the user to acquire information about the user’s profile and goals. The fundamental analysis agent is a task assistant that acquires and interprets information about a stock’s fundamental value. The technical analysis agent uses numerical techniques to try to predict the near future in the stock market. The breaking news agent tracks and filters news stories and decides if they 448

Software Agent Orientation: A New Paradigm are so important that the user needs to know about them immediately, in that the stock price might be immediately affected. Finally, the analyst tracking agent tries to gather intelligence about what human analysts are thinking about a company. Users of a site that deals with stock market information spend a lot of time rechecking the site for new stock reports or market data. Users could be given agents that e-mail them when information relevant to their portfolio becomes available or changes. According to researchers at the City University of Hong Kong,40 intelligent agents are well suited for monitoring financial markets and detecting hidden financial problems and reporting abnormal financial transaction, such as financial fraud, unhedged risks, and other inconsistencies. Other monitoring tasks involve fraud detection, credit risk monitoring, and position risk monitoring. For example, an external environment monitoring agent could collect and summarize any movements in the U.S. Treasury Bond yield in order to monitor any financial risk that may result from movements within the bond market. Researchers at Georgia State University (GSU) have proposed a MultiAgent Decision Support System, which is an alternative to the traditional Data + Model Decision Support System.41 The system is composed of a society of agents classified into three categories according to the phases of Simon’s problem solving model. Herbert Simon21 proposed a model of human decision making that consists of three interdependent phases: intelligence, design, and choice. The intelligence phase involves searching or scanning the environment for problems and opportunities. In the intelligence phase, the environment is searched to find and formulate problem situations. Design phase activities include searching for, developing, and analyzing possible alternative courses of actions. This phase can be divided into search routine (find ready-made solutions) and design routine (used if no ready-made solution is available). In the choice phase, a course of action is chosen from the available alternatives. During the choice phase, further analysis is performed and an alternative is selected. The agent system developed at GSU is composed of intelligence-phase agents, design-phase agents, and choice-phase agents. It has been used to support investment decisions. Networking and Telecommunications Agent technology is also used for network management applications.42 Complex networks include multiple platforms and networking devices such as routers, hubs, and switches. Agents can monitor the performance of a device and report any deviation from regular performance to a network monitoring system. They are also used to execute critical functions, including performance monitoring, fault detection, and asset management. 449
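At the device level, such an agent essentially compares each sampled metric against a baseline of recent behavior and reports deviations upward. The sketch below illustrates the idea only; the rolling-window baseline, the standard-deviation thresholds, and the central_monitor callback are assumptions chosen for the example, not features of any vendor's management agent.

    import statistics

    class DeviceAgent:
        """Watches one metric on one device and reports abnormal readings upward."""

        def __init__(self, device, report, window=20, tolerance=3.0):
            self.device = device
            self.report = report        # callback into the central monitoring system
            self.window = window        # how many recent samples define "regular" behavior
            self.tolerance = tolerance  # standard deviations that count as a deviation
            self.history = []

        def observe(self, value):
            if len(self.history) >= self.window:
                mean = statistics.fmean(self.history)
                stdev = statistics.pstdev(self.history) or 1e-9
                deviation = abs(value - mean)
                if deviation > self.tolerance * stdev:
                    severity = "critical" if deviation > 2 * self.tolerance * stdev else "warning"
                    self.report(self.device, value, mean, severity)
            self.history.append(value)
            self.history = self.history[-self.window:]

    # Usage with a hypothetical central monitor that simply prints reports.
    def central_monitor(device, value, baseline, severity):
        print(f"{severity.upper()} on {device}: observed {value:.1f}, baseline {baseline:.1f}")

    agent = DeviceAgent("router-1 CPU %", central_monitor)
    for sample in [20, 22, 19, 21, 20, 23, 18, 21, 22, 20,
                   19, 21, 20, 22, 21, 20, 19, 22, 21, 20, 95]:
        agent.observe(sample)   # the final spike to 95 triggers a critical report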

PROVIDING APPLICATION SOLUTIONS Software agents can be used to monitor network performance and also in the analysis of the traffic and load on a network. Agents can identify overutilized or underutilized servers or resources based on prespecified performance objectives. This information can be used for improving network design and for identifying required changes to network configuration and layout. Based on the information collected by the agents, the network manager can determine if it is time to increase server capacity or increase the available bandwidth. They can also monitor network devices, report unusual situations, and identify faults and errors. Using instant fault notification, network downtime can be reduced. Some sample error detection tasks include detecting a broadcast storm, a faulty server, or damaged Ethernet frames. Agents can also be organized in a hierarchical arrangement. Low-level agents can monitor local devices and attempt to fix minor problems. More serious problems can be reported to a centralized node, and critical problems can be routed directly to the network administrator using an automatic e-mail/pager alerting system. Many networking equipment manufacturers have realized the benefits of agent technology and they provide device-specific agents for routers, hubs, and switches. Distance Education Software agents are used for supporting distance education. For example, an agent developed at Chun Yuan Christian University uses a diagnosis problem-solving network to provide feedback to distance learning students.43 Students solve mathematical problems and the agent checks the results, provides a diagnostic result, and suggests remedial action. SAFARI, developed at the Heron Labs of Middle East Technical University, is an intelligent tutoring system with a multi-agent architecture.44 The intelligent agents provide student guidance in Web-based courses. They can also customize content based on student performance. Software agents can also provide tutoring capabilities in a virtual classroom.38 For example, The Advance Research Projects Agency’s Computer Assisted Education and Training Initiative (CAETI) provides individualized learning in combination with a group-learning approach based on a multiuser environment and simulation. Manufacturing A classical problem in manufacturing is scheduling, which involves the optimal allocation of limited resources among parallel and sequential activities.45 There are many traditional approaches to the manufacturing scheduling problem. Traditional heuristic techniques use a trial-and-error approach or an iterative algorithm. According to Weiming Shen,45 with the National Research Council of Canada, in many respects agents use a supe450

Software Agent Orientation: A New Paradigm rior approach to developing schedules. Agents use a negotiation approach, which is more similar to the approach used by organizations in the real world. Agents can model not just the interaction at the shop floor level, but also at the supply-chain level. The National Center for Manufacturing Sciences, based in Ann Arbor, Michigan, has developed the Shop Floor Agents project.46 This is an applied agent-based system for shop floor scheduling and machine control. The prototype system was implemented in three industrial scenarios sponsored by AMP, General Motors, and Rockwell Automation/Allen-Bradley. The PABADIS system, a multi-agent system developed at the Ecole des Mines d’Ales in France, uses mobile agents based on production orders issued by an enterprise resource planning (ERP) system.2 The PABADIS virtual factory system enables the configuration and reconfiguration of a production system. It uses an auction coordination mechanism in which agents negotiate based on bid-allocation and temporal feasibility constraints. Infosys Technologies Ltd. has developed an agent-oriented framework for sales order processing.47 The framework, called Agent-Based Sales Order Processing System (AESOPS), allows logistics personnel to conceptualize, design, and build a production environment as a set of distributed units over a number of physical locations. These production units can interact with each other to process any order to completion in a flexible yet consistent and efficient manner. Healthcare According to the Health Informatics Research Group at the Universiti Sains Malaysia, the utilization of agent technology in healthcare knowledge management is a highly viable solution to providing necessary assistance to healthcare practitioners while procuring relevant healthcare knowledge.48 This group has developed a data mining agent in which the core functionality is to retrieve and consolidate data from multiple healthcare data repositories.49 The potential decision-support/strategic-planning applications include analysis of hospital admission trends and analysis of the costeffectiveness of healthcare management. The system contains a defined taxonomy of healthcare knowledge and a standard healthcare vocabulary to achieve knowledge standardization. It also has medical databases that contain data obtained from various studies or surveys. The clinical case bases contain “snapshots” of actual past clinical cases encountered by healthcare practitioners and other experts. 451
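The auction-style coordination described above under Manufacturing can be pictured as a simple announce, bid, and award round between a job and a set of machine agents. The sketch below is a deliberately simplified, contract-net-flavored illustration; the bid formula based on queue length and all of the names are assumptions for the example, not the PABADIS or AESOPS protocols.

    class MachineAgent:
        """Bids on announced jobs based on its own load and capabilities."""

        def __init__(self, name, capabilities, queue_hours=0.0):
            self.name = name
            self.capabilities = capabilities
            self.queue_hours = queue_hours

        def bid(self, job):
            if job["operation"] not in self.capabilities:
                return None                       # cannot do the work at all
            # Lower bid means earlier feasible completion (current queue plus processing time).
            return self.queue_hours + job["hours"]

        def award(self, job):
            self.queue_hours += job["hours"]

    def allocate(job, machines):
        """One announce-bid-award round: give the job to the best feasible bidder."""
        bids = [(m.bid(job), m) for m in machines]
        bids = [(b, m) for b, m in bids if b is not None]
        if not bids:
            return None
        best_bid, winner = min(bids, key=lambda bm: bm[0])
        winner.award(job)
        return winner.name, best_bid

    machines = [MachineAgent("mill-1", {"milling"}, queue_hours=4),
                MachineAgent("mill-2", {"milling"}, queue_hours=1),
                MachineAgent("lathe-1", {"turning"}, queue_hours=0)]

    print(allocate({"operation": "milling", "hours": 2.5}, machines))  # ('mill-2', 3.5)
    print(allocate({"operation": "milling", "hours": 1.0}, machines))  # ('mill-2', 4.5)

Because each machine bids from its own local state, the schedule emerges from negotiation rather than from a central optimization, which is the point the manufacturing examples above are making.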

PROVIDING APPLICATION SOLUTIONS ISSUES SURROUNDING AGENTS Several implementation and technical issues must be resolved before broad use of agent technologies can take off. These are discussed below. Implementation Issues Unrealistically high expectations promoted by computer trade magazines and software marketing literature may affect the implementation of intranet/Internet systems and agent software. Before implementing a new Internet technology, managers and users should be aware that a new technology does not guarantee improved productivity by itself. Implementing the most expensive or sophisticated intranet system may not necessarily provide benefits reflected in improved productivity; in fact, a less costly system may provide the same benefits. There are instances where systems may be implemented as easily, without using sophisticated agents. It is important to carefully plan agents’ implementation to truly increase productivity and reduce the risk of implementation failure. It has been proposed that Internet delivery is a proven way to improve information deployment and knowledge sharing in organizations. However, more understanding is needed of the effects of information delivery in decision making and how information delivery is influenced by other variables. Many companies have implemented intranet sites after large investments, expecting improved information deployment and ultimately better decision making. However, developers and implementers should also take into account nontechnical issues. It is important to train and educate employees and managers to take full advantage of agent technologies. Managers read articles that promise increases in revenue and productivity by implementing agent technology, and they want to implement the same technologies in their companies. They may, however, have no clear vision of how agents can enhance existing processes or improve information deployment. There are many arguments that agent technology will lead to productivity improvements, but some of these arguments have not been tested in practice. Agents are not a panacea, and problems that have troubled software systems are also applicable to agents. Miscellaneous Technical Issues There are many technical issues that need to be resolved, such as the development of standards that facilitate the interaction of agents from different environments, the integration of legacy systems and agents, and security concerns regarding cash handling. Legacy systems are usually mainframe based and were initially set up long before the widespread adoption of the Internet. Therefore, mainframe systems use nonroutable 452

Software Agent Orientation: A New Paradigm protocols, which are not compatible with the TCP/IP family of Internet protocols. This intrinsic limitation makes the integration of legacy systems and agent technology very difficult. To exploit the full synergy of these two technologies, there is a need for middleware and interfacing systems. CONCLUSION Information systems managers and researchers should keep a close eye on agents because they offer an excellent alternative for managing information. Many corporate information technology managers and application developers are considering the potential business applications of agent technologies. This represents a new approach to software development. Most agent programs today are site specific; companies are adding agents to their Web sites with knowledge and features specific to the business of that organization. At the most fundamental level, agents provide sites with value-added services that leverage the value of their existing content. Some companies that may benefit from agents include: • Information publishers such as news and online services, which can filter and deliver information that satisfies subscribers’ personalized search profiles by the use of personalized agents. • Companies implementing intranets, which provide their employees with monitoring of industry news, product developments, data about competitors, e-mail, groupware software, and environmental scanning. • Product vendors can provide customer support by informing customers about new products, updates, tips, and documentation, depending on each customer’s personalized profile. Businesses could provide other agents that automate customer service and support, disseminate, or gather information, and generally save the user from mundane and repetitive tasks. As the Web matures, these valueadded services will become critical in differentiating Web sites from their competition and maximizing the content on the site. The evolution of agents will undoubtedly impact future work practices. Current technology is already delivering benefits to users. By introducing more advanced functionality and additional autonomy to agents in an incremental way, organizations will benefit more from this technology. Systems and applications are designed based on the agent approach. Agent orientation is no longer an emerging technology but rather a powerful approach that is pervasive in systems and applications in almost every single area. In summary, the agent paradigm is the next step in the evolution of software development after object orientation. 453

PROVIDING APPLICATION SOLUTIONS References 1. Wooldridge, M. and Jennings, N., “Intelligent Agents: Theory and Practice,” The Knowledge Engineering Review, 10(2), 115–152, 1996. 2. Diep, D., Masotte, P., Reaidy, J., and Liu, Y.J., “Design and Integration of Intelligent Agents to Implement Sustainable Production Systems,” in Proc. of the Second Intl. Symposium on Environmentally-Conscious Design and Inverse Manufacturing ECODESIGN, 2001, 729–734. 3. Barker, J.A., Future Edge: Discovering the New Paradigms of Success, William Morrow and Company, 1992, chap. 3–6. 4. Kinny, D., Georgeff, M., and Rao, A., “A Methodology and Modeling Technique for Systems of BDI Agents,” in Agents Breaking Away, Springer-Verlag, 1996, 56–71. 5. Wagner, G., Call for position papers, Agent-Oriented Information Systems Web site (www.aois.org). 6. O’Malley, S.A. and DeLoach, S.A., “Determining When to Use an Agent-Oriented Software Engineering Paradigm,” in Agent-Oriented Software Engineering II LNCS 2222, Second International Workshop, AOSE 2001, Montreal, Canada, May 29, 2001, Wooldridge, M.J. and Ciancarini, W.P., Eds., Springer-Verlag, Berlin, 2001, 188. 7. Wooldridge, M., Muller, J.P., and Tambe, M., “Agent Theories, Architectures, and Languages: A Bibliography,” in Intelligent Agents II, Agent Theories, Architectures and Languages, IJCAI 1995 Workshop, Montreal, Canada, 1995, 408–431. 8. Debenham, J. and Henderson-Sellers, B., “Full Lifecycle Methodologies for Agent-Oriented Systems — The Extended OPEN Process Framework,” in Agent-Oriented Information Systems at CAiSE’02, 27–28, May 2002, Toronto, Ontario, Canada. 9. Berners-Lee, T., Cailliau R., Loutonen, H., Nielsen, F., and Secret, A., “The World-Wide Web,” Communications of the ACM, 37(8), 76–82, August 1994. 10. Daig, L., “Position Paper,” ACM SigComm’95 -MiddleWare Workshop, April 1995. 11. Bates, J., "The Role of Emotion in Believable Characters," Communications of the ACM, 37(7), 122–125, 1994. 12. Maes, P., “Agents that Reduce Work and Information Overload,” Communications of the ACM, 37(7), 31–40, 1994. 13. Newell, S., “User Models and Filtering Agents for Improved Internet Information Retrieval,” User Modeling and User-Adapted Interaction, 7(4), 223–237, 1997. 14. Castelfranchi, C., “Guarantees for Autonomy in Cognitive Agent Architecture,” in Intelligent Agents, Proceedings of the ECAI-94 Workshop on Agent Theories, Architectures and Languages, Amsterdam, The Netherlands, 1994, 56–70. 15. Guha, R.V. and Lenat, D.B., “Enabling Agents to Work Together,” Communications of the ACM, 37(7), 127–141, 1994. 16. Guichard, F. and Ayel, J., “Logical Reorganization of Distributed Artificial Intelligence Systems, Intelligent Agents,” in Proc. of the ECAI-94 Workshop on Agent Theories, Architectures and Languages, Amsterdam, The Netherlands, 1994, 118–128. 17. Haddadi, A., Communication and Cooperation in Agent Systems, Springer-Verlag, 1996, 1–2, 52–53. 18. Lashkari, Y., Metral, M., and Maes, P., “Collaborative Interface Agents,” in Proc. of the National Conference on Artificial Intelligence, July 31-Aug. 4, 1994, 444–449, Seattle, Washington. 19. Katz, D. and Kahn, R.L., “Common Characteristics of Open Systems, in Systems Thinking, Penguin Books, Middlesex, England 1969, 86–104. 20. Bertalanffy, L. von, "General System Theory,” in General Systems Textbook, Vol.1, 1956. 21. Simon, H.A., “Decision Making and Problem Solving,” Interfaces, 17(5), 11–31, SeptemberOctober 1987. 22. 
Sommerhoff, G., “The Abstract Characteristics of Living Systems,” in Systems Thinking, Penguin Books, Middlesex, England, 1969, 147–202. 23. Van Gigch, J.P., Systems Design Modeling and Metamodeling, Plenum Press, New York, 1991. 24. d’Inverno, M. and Luck, M., “Formalising the Contract Net as a Goal-Directed System,” in Agents Breaking Away, Springer-Verlag, 1996, 72–85. 25. Norman, D., “How Might People Interact with Agents,” Communications of the ACM, 37(7), 68–76, 1994. 26. Ashby, W.R., “Adaptation in the Multistable Environment,” in Design for a Brain, 2nd ed., Wiley, New York, 1960, 205–214.

454

Software Agent Orientation: A New Paradigm 27. Giroux, S., “Open Reflective Agents,” in Intelligent Agents II, Agent Theories, Architectures and Languages, IJCAI 1995 Workshop, Montreal, Canada, 1995, 315–330. 28. Mitchell, T. et al., “Experience with a Learning Personal Assistant,” Communications of the ACM, 37(7), 81–91, 1994. 29. Cunnyngham, J., “Cybernetic Design for a Strategic Information System,” in Applied Systems and Cybernetics, Lasker, G.E. Ed., Pergamon Press, New York, 1980, 920–925. 30. Lieberman, H., “Letizia, A User Interface Agent for Helping Browse the World Wide Web,” International Joint Conference on Artificial Intelligence, Montreal, Canada, August 1995. 31. Vogler, H., Moschgath, M., and Kunkelman, T., “Enhancing Mobile Agents with Electronic Commerce Capabilities,” in Cooperative Information Agents II, Proceedings of the Second International Workshop, CIA 1998, Klusch, M. and Weib, G., Eds., Paris, France, July 1998, Springer-Verlag, Germany, 148–159. 32. Murch, R. and Johnson, T., Intelligent Software Agents, Prentice Hall, Upper Saddle River, NJ, 1999. 33. Brewington, B. et al., “Mobile Agents for Distributed Information Retrieval,” in Intelligent Information Agents, Klusch, M., Ed., Springer-Verlag, Germany, 1999, 354–395 34. Hutheesing, N., “Spider’s Helper,” Forbes, 158(1), 79, July 1, 1996. 35. Etzioni, O. and Weld, D., “A Softbot-Based Interface to the Internet,” Communications of the ACM, 37(7), 72–76, 1994. 36. Selberg, E. and Etzioni O., “The MetaCrawler Architecture for Resource Aggregation on the Web,” IEEE Expert, 12(1): 8–14, 1997. 37. Kowalczyk, R. et al., “InterMarket — Towards Intelligent Mobile Agent e-Marketplaces,” in Proceedings of the Ninth Annual IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, April 8–11, 2002, Lund Swenden, IEEE.. 38. O’Leary, D.E., Kuokka, D., and Plant, R., “Artificial Intelligence and Virtual Organizations,” Communications of the ACM, 40:1, January 1997, 52. 39. EPRI, E-Commerce Applications and Issues for the Power Industry, EPRI technical report, (TR-114659), 2000. 40. Wang, H., Mylopoulos, J., and Liao, S., “Intelligent Agents and Financial Risk Monitoring Systems,” Communications of the ACM, 45(3), 83, March 2002. 41. Fazlollahi, B. and Vahidov, R., “Multi-Agent Decision Support System Incorporating Fuzzy Logic,” 19th International Conference of the North-American Fuzzy Information Processing Society, 13–15 July 2000, Atlanta, GA, 246–250. 42. Muller, N.J., “Improving Network Operations with Intelligent Agents,” International J. of Network Management, 7, 116–126, 1997. 43. Chang, J. et al., “Implementing a Diagnostic Intelligent System for Problem Solving in Instructional Systems,” in Proc. of the International Workshop on Advanced Learning Technologies 2000, 29–30. 44. Ozdemir, B. and Alpaslan, F.N., “An Intelligent Tutoring System for Student Guidance in Web-Based Courses,” in Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, August 30–September 1, 2000, Brighton, UK. 45. Shen, W., “Distributed Manufacturing Scheduling Using Intelligent Agents,” IEEE Intelligent Systems, (17)1: 88–94, 2002. 46. Parunak, H.V.D., Workshop Report: Implementing Manufacturing Agents (in conjunction with PAAM 96), National Center for Manufacturing Sciences, Ann Arbor, MI, 1996. 47. Mondal, A.S., A Multi-Agent System for Sales Order Processing, Intelligence, 32, Fall 2001. 48. 
Zaidi, S.Z.H., Abidi, S.S.R., and Manickam, S., “Distributed Data Mining from Heterogeneous Healthcare Data Repositories: Towards an Intelligent Agent-Based Framework,” in Proc. of the 15th IEEE Symposium on Computer-Based Medical Systems, 2002, 339–342. 49. Hashmi, Z.I., Abidi, S.S.R., and Cheah, Y.N., “An Intelligent Agent-Based Knowledge Broker for Enterprise-Wide Healthcare Knowledge Procurement,” in Proc. of the 15th IEEE symposium on Computer-Based Medical Systems, 2002.

455

This page intentionally left blank

Chapter 37

The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic Robert L. Glass

Methodology — the body of methods used in a particular branch of activity Method — a procedure or way of doing something — Definitions from the Oxford American Dictionary

To use a methodology is to choose an orderly, systematic way of doing something. At least that is the message the dictionary brings us. But what does that really mean in the context of the systems and software field? There has been a quiet evolution in that real meaning over the last few decades. In the beginning (the 1950s), there were few methods and no methodologies. Solution approaches tended to focus attention on the problem at hand. Because methodologies did not exist, systems developers chose from a limited collection of “best-of-breed” methods. Problem solution was difficult, but with nothing much to compare with, developers had the feeling they were making remarkable progress in solving application problems. That “best-of (primitive method)-breed” approach persisted through a decade or two of the early history of the systems development field. And then suddenly (in the 1970s), the first real methodology burst forth, and the systems development field would never be the same again. Not only was the first methodology an exciting, even revolutionary, addition to the 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


PROVIDING APPLICATION SOLUTIONS field, but the assumption was made that this one methodology (structured analysis and design, later to be called the “structured revolution”) was suitable for any systems project the developer might encounter. We had passed from the era of no methodology to the era of best methodology. Some, looking back on this era, refer to it as the one-size-fits-all era. But as time went by, what had begun as a field characterized by one single best methodology changed again. Competing methodologies began to appear on the scene. There was the information engineering approach. There was object orientation. There was event-driven systems development. What had been a matter of simple choice had evolved into something very complex. What was going on here? With this brief evolutionary view of the field, let us go back over some of the events just described to elaborate a bit more on what has been going on in the methodology movement and where we are headed today. TOOLS AND METHODS CAME FIRST In the early days, the most prominent systems development tools were the operating system and the compiler. The operating system, which came along in the mid–late 1950s, was a tool invented to allow programmers to ignore the bare-bones software interface of the computer and to talk to that interface through intermediary software. Then came high-order language (HOL), and with it the compiler to translate that HOL into so-called machine code. HOLs such as Fortran and COBOL became popular quickly; the majority of software developers had chosen to write in HOL by the end of the 1950s. Shortly thereafter, in the early 1960s, a more generous supply of support tools to aid in software development became available. There were debuggers, to allow programmers to seek, find, and eliminate errors in their code. There were flowcharters, to provide automated support for the drawing of design representations. There were structural analyzers, to examine code searching for anomalies that might be connected with errors or other problems. There were test drivers, harnesses for testing small units of software. There were error reporters, used for tracking the status of errors as they occurred in the software product. There were report generators, generalized tools for making report creation easy. And with the advent of these tools came methods. It was not enough to make use of one or more tools; methods were invented to describe how to use them. By the mid to late 1960s there was, in fact, a thriving collection of individual tools and methods useful to the software developer. Also evolving at a steady but slower rate was a body of literature describing better ways to 458

The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic use those tools and methods. The first academic computer science (CS) program was put in place at Purdue University in the early 1960s. Toward the end of that decade, CS programs were beginning to become commonplace. At about the same time, the first academic information systems programs began to appear. The literature, slow to evolve until the academic presence began, grew rapidly. What was missing, at this point in time (the late 1960s), was something that tied together all those evolving tools and methods in some sort of organized fashion. The CS and IS textbooks provided some of that organization, but the need was beginning to arise for something more profound. The scene had been set for the appearance of the concept of “methodology.” THE METHODOLOGY During the 1970s, methodologies exploded onto the software scene. From a variety of sources — project work done at IBM, analytical work done by several emerging methodology gurus, and with the active support and funding of the U.S. Department of Defense, the “structured methodologies” sprang forth. At the heart of the structured methodologies was an analysis and design approach — structured analysis and design (SA&D). There was much more to the structured methodologies than that — the Department of Defense funded IBM’s development of a 15-volume set of documents describing the entire methodological package, for example — but to most software developers, SA&D was the structured methodology. Textbooks describing the approach were written. Lectures and seminars and, eventually, academic classes in the techniques were conducted. In the space of only a few years during the 1970s, SA&D went from being a new and innovative idea to being the established best way to build software. By 1980 few software developers had not been trained/educated in these approaches. What did SA&D consist of? For a while, as the popularity of the approach boomed, it seemed as if any idea ever proposed by any methodology guru was being slipped in under the umbrella of the structured methodologies, and the field covered by that umbrella became so broad as to be nearly meaningless. But at heart there were some specific things meant by SA&D: • Analysis. Requirements elicitation, determination, analysis — obtaining and structuring the requirements of the problem to be solved: — The data flow diagram (DFD), for representing the processes (functions/tasks) of the problem, and the data flow among those processes — Process specifications, specifically defining the primitive (rudimentary or fundamental) processes of the DFD; entity/relationship (E/R) diagrams representing the data relationships 459

PROVIDING APPLICATION SOLUTIONS • Design. Top-down design, analyzing the most important parts of the problem first: — Transformation analysis to convert the DFDs into structure charts (representing process design) and thence to pseudocode (detail level process design) • Coding. Constructing single entry/exit modules consisting only of the programming concepts sequence, selection, and iteration. TOWARD THE HOLY GRAIL OF GENERALITY An underlying theoretical development was also happening in parallel with the practical development of the concept: an evolution from the problemspecific approaches of the early days of computing toward more generalized approaches. The early approaches were thought of as “ad hoc,” a term that in CS circles came to mean “chaotic and disorganized,” perhaps the worst thing that could be said about a software development effort. Ad hoc, in effect, became a computing dirty word. As tools and methods and, later, methodologies evolved, the field appeared to be approaching a “holy grail” of generality. There would be one set of tools, one set of methods, and one methodology for all software developers to use. From the early beginnings of Fortran and COBOL, which were problem-specific languages for the scientific/engineering and business/information systems fields, respectively, the field evolved toward more general programming languages, defined to be suitable for all application domains. First, PL/1 (sometimes called the “kitchen sink” language because it explicitly combined the capabilities of Fortran and COBOL) and later Pascal and C/C++/Java were deliberately defined to allow the solution of all classes of problems. But soon, as mentioned earlier, cracks appeared in this veneer of generality. First, there was information engineering, a data/information-oriented methodology. Information engineering not only took a different approach to looking at the problem to be solved, but in fact appeared to be applicable to a very different class of problem. Then there was object orientation (OO), which focused on a collection of data and the set of processes that could act on that data. And, most recently, there was the event-driven methodology, best personified by the many “visual” programming languages appearing on the scene. In the event-driven approach, a program is written as a collection of event servicers, not just as a collection of functions or objects or information stores. The so-called Visual languages (Visual Basic is the best example) allowed system developers to create graphical user interfaces (GUIs) that responded to user-created “events.” Although many said that the event-driven approach was just another way of looking at problems from an OO point of view, the fact that Visual Basic 460

The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic is a language with almost no object capability soon made it clear that events and objects are rather different things. If the software world only needed one holy grail approach to problem solution, why was this proliferation of competing methodologies occurring? The answer for the OO advocates was fairly straightforward — OO was simply a better approach than the now obsolete structured approaches. It was a natural form of problem solution, they said, and it led more straightforwardly to the formation of a culture of reuse, in which components from past software efforts could be used like Lego™ building blocks to build new software products. But the rise of information engineering before the OO approaches, and event-driven after them, was perplexing. It was fairly clear to most who understood both the structured and information approaches, that they were appropriate for rather different kinds of problems. If a problem involved many processes, then the structured approaches seemed to work best. If there was a lot of data manipulation, then the information approaches worked best. And the event approach was obviously characterized by problems where the need was to respond to events. Thus it had begun to appear that the field was reverting to a more problem-focused approach. Because of that, a new interest arose in the definition of the term “ad hoc.” The use of “ad hoc” to mean chaotic and disorganized, it was soon learned, was wrong. Ad hoc really means focused on the problem at hand. TROUBLE ON THE GENERALITY RANCH Meanwhile, there was additional trouble with the one-size-fits-all view. Researchers in another part of the academic forest began looking at systems development from a new point of view. Instead of defining a methodology and then advocating that it be used in practice — a prescriptive approach that had been used for most of the prior methodologies — they began studying instead what practitioners in the field actually did with methodologies. These “method engineering” researchers discovered that most practitioners were not using methodologies as the methodology gurus had expected them to. Instead of using these methodologies “out of the box,” practitioners were bending and modifying them, picking and choosing portions of different methodologies to use on specific projects. According to the research findings, 88 percent of organizations using something as ubiquitous as the structured methodology were tailoring it to meet their specific project needs. At first, the purists made such statements as “the practitioners are losing the rigorous and consistent capabilities that the methodologies were invented to provide.” But then, another viewpoint began to emerge: 461

PROVIDING APPLICATION SOLUTIONS researchers began to accept as fait accompli that methodologies would be modified, and began working toward providing better advice for tailoring and customization and defining methodological approaches that lent themselves to tailoring and customizing. In fact, the most recent trend among method engineers is to describe the process of modifying methods and to invent the concept of “meta-modeling,” an approach to providing modifiable methods. This evolution in viewing methodologies is still under way. Strong factions continue to see the general approach as the correct approach. Many advocates of the OO methodology, for example, tend to be in this faction, and they see OO as the inevitable holy grail. (It is no accident that in OO’s most popular commercial modeling language, UML, the “U” stands for “Unified,” with an implication that it really means “Universal.”) Most data from practitioner surveys shows, however, that the OO approaches have been very slow taking hold. The structured methodology still seems to dominate in practice. A PROBLEM-FOCUSED METHODOLOGICAL APPROACH There is certainly a strong rationale for the problem-focused methodological approach. For one thing, the breadth of problems being tackled in the computing field is enormous and ever increasing. Do we really imagine that the same approach can be used for a hard real-time problem that must respond to events with nanosecond tolerances, and an IS problem that manipulates enormous quantities of data and produces a complex set of reports and screens? People who see those differences tend to divide the software field into a diverse set of classes of problems based on size, application domain, criticality, and innovativeness, as follows: • Size. Some problems are enormously more complicated than others. It is well-known in the software field that for every tenfold increase in the complexity of a problem, there is a one-hundredfold increase in the complexity of its solution. • Application domain. There are very different kinds of problems to be solved: — Business systems, characterized by masses of data and complex reporting requirements — Scientific/engineering systems, characterized by complex mathematical sophistication — System programming, the development of the tools to be used by application programmers — Hard real-time systems, those with terribly tight timing constraints — Edutainment, characterized by the production of complex graphical images 462

The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic • Criticality. Some problem solutions involve risking lives and/or huge quantities of money. • Innovativeness. Some problems simply do not lend themselves to traditional problem-solving techniques. It is clear to those who have been following the field of software practice that it would be extremely difficult for any methodology to work for that enormously varied set of classes of problems. Some classes require formal management and communication approaches (e.g., large projects); others may not (e.g., small and/or innovative projects). Some require specialized quality techniques, such as performance engineering for hard real-time problems and rigorous error-removal approaches for critical problems. There are domain-specific skill needs, such as mathematics for the scientific/engineering domain and graphics for edutainment. The fragmentation of the methodology field, however, has left us with a serious problem. There is not yet a mapping between the kinds of solution approaches (methodologies) and the kinds of problems. Worse yet, there is not even a generally accepted taxonomy of the kinds of problems that exist. The list of problem types described previously, although generally accepted at a superficial level, is by no means accepted as the definitive statement of what types of problems exist in the field. And until a generally agreed-on taxonomy of problem types exists, it will be nearly impossible to produce that much-needed mapping of methodologies to problems. Even before such practical problems can be solved, an attitudinal problem must also be overcome. The hope for that “holy grail” universal solution tends to steer enormous amounts of energy and brilliance away from the search for better problem-focused methodologies. The computing field in general does not yet appear ready to move forward in any dramatic way toward more problem-specific solution approaches. Some of the method engineering people are beginning to move the field forward in some positive, problem-focused ways. Others are holding out for another meta-methodology, with the apparent hope that there will be a giant umbrella over all of these specialized methodologies — a new kind of “holy grail” of generality. THE BOTTOM LINE My own storytelling suggests that the methodology movement has moved from none (an era when there were no methodologies at all and problemsolution approaches focused on the problem at hand) to one-size-fits-all (there was one single best methodology for everyone to use) to prolific (there were apparently competing choices of which “best” methodology to use) to tailored (methodology choices were back to focusing on the problem at hand). 463

PROVIDING APPLICATION SOLUTIONS Not everyone, however, sees the topic of methodology in this same way. There are those who still adhere to a one-size-fits-all view. There are those who think that tailoring methodologies is wrong. There are those who point to a lack of a taxonomy of applications, or a taxonomy of methodologies, or an ability to map between these (missing) taxonomies, as evidence that the field is not yet ready for methodologies focused on the problem at hand. The methodology field, like the software engineering field of which it is a part, is still young and immature. What does this mean for knowledgeable managers of software projects? First, they must stay on top of the methodology field because its sands are shifting frequently. Second, for now, they must expect that no single methodology will solve the whole problem. They must be prepared for some technical, problem-focused tinkering with standard methodologies. Further, many larger software projects today involve a three-tiered solution — a user interface (front end) tier, a database/Internet (back end) tier, and the application problem solution (middle) tier. Each of those tiers will tend to need a different methodological approach: • The front end will probably be attacked with an event-driven GUI builder, probably using one of the Visual programming languages. • The back end will likely be addressed using an information-based database system using SQL, or an object-oriented Internet system, perhaps using Java. • The middle tier will be addressed by a problem-focused methodology, perhaps the structured approaches for process-oriented problems, information engineering for data-focused problems, or an object-oriented approach for problems that involve a mixture of data objects and their associated processes. Event-driven plus information-based or object-oriented plus some combination of the above? Does that not mean that systems development is becoming enormously more complex? The answer, of course, is yes. But there is another way of looking at this plethora of problem-solving approaches. The toolbox of a carpenter contains much more than one simple, universal tool. Should we not expect that the toolbox of a systems developer be diverse as well?
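As a rough sketch of how those three tiers can meet in code, consider an order-inquiry feature: a front-end button whose event handler calls a middle-tier routine, which in turn issues SQL against the back end. The scenario, table name, and connection URL below are invented for illustration (and assume a JDBC driver such as HSQLDB on the classpath); the point is only the division of labor between the tiers, not any particular toolkit.

```java
import java.awt.BorderLayout;
import java.sql.*;
import javax.swing.*;

public class ThreeTierSketch {

    // Middle tier: the problem-focused business logic.
    static double outstandingBalance(Connection db, int customerId) throws SQLException {
        // Back-end tier: information-based access expressed in SQL (invoice table assumed to exist).
        String sql = "SELECT COALESCE(SUM(amount), 0) FROM invoice WHERE customer_id = ?";
        try (PreparedStatement ps = db.prepareStatement(sql)) {
            ps.setInt(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getDouble(1);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Illustrative connection only; assumes an in-memory HSQLDB driver on the classpath.
        Connection db = DriverManager.getConnection("jdbc:hsqldb:mem:demo", "sa", "");

        JButton lookup = new JButton("Show balance for customer 42");
        JLabel result = new JLabel(" ");

        // Front-end tier: an event-driven GUI; the handler services the user-created event.
        lookup.addActionListener(event -> {
            try {
                result.setText("Outstanding balance: " + outstandingBalance(db, 42));
            } catch (SQLException ex) {
                result.setText("Lookup failed: " + ex.getMessage());
            }
        });

        JFrame frame = new JFrame("Order Inquiry");
        frame.add(lookup, BorderLayout.NORTH);
        frame.add(result, BorderLayout.SOUTH);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```

Even in this toy form, each tier invites a different methodological mindset: the handler is event-driven, the middle routine is problem-focused, and the SQL access is information-based.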


Chapter 38

Usability: Happier Users Mean Greater Profits Luke Hohmann

WHY YOU SHOULD CARE ABOUT USABILITY

Historically, usability has been associated with the user interface, and human–computer interaction (HCI) professionals have tended to concentrate their work on it. This makes perfect sense because most of our understanding of a given system and most of our perceptions of its usability are shaped by the user interface. Usability is, however, much deeper than the user interface. Usability refers to the complex set of choices that ends up allowing the users of the system to accomplish one or more specific tasks easily, efficiently, enjoyably, and with a minimum of errors. In this case, “users of the system” refers to all users:

• System administrators who install, configure, customize, and support your system
• Developers and system integrators who integrate your system with other applications
• End users who use the system directly to accomplish their tasks, from basic data entry in corporate systems to strategic decision making supported through complex business information systems

Many of these choices are directly influenced by your technical architecture. That is, if your system is perceived as usable, it will be usable because it was fundamentally architected to be usable. Architecting a system to be usable is a lot of work but it is worth it. A large amount of compelling evidence indicates that usability is an investment that pays for itself quickly over the life of the product. A detailed analysis of the economic impact of usability is beyond the scope of this chapter, but anecdotal evidence speaks strongly about the importance of usability.


One system the author of this chapter worked on was used by one of the world’s largest online retailers. A single telephone call to customer support could destroy the profits associated with two dozen or more successful transactions. In this case, usability was paramount. Other applications may not be quite as sensitive to usability, but practical experience demonstrates that executives routinely underestimate the importance of usability. The benefits of usable systems include any or all of the following, each of which can be quantified:

• Reduced training costs
• Reduced support and service costs
• Reduced error costs
• Increased productivity of users
• Increased customer satisfaction
• Increased maintainability

Given the wide range of areas in which improving usability can reduce costs and increase profits, it is easy to see why senior managers should care about usability, and make creating usable systems a primary requirement of all development efforts.

CREATING USABLE SYSTEMS

Creating usable applications centers around four key processes:

1. Understanding users. The cornerstone of creating usable applications is based on an intimate understanding of the users, their needs, and the tasks that must be accomplished. The outcome of this understanding results in a description of the users’ mental model. A mental model is the representation of the problem users have formed as they accomplish tasks. Understanding mental models enables designers to create system models that supplement and support users. System models are, in turn, conveyed to users through the use of metaphors.

2. A progression from “lo-fidelity” to “hi-fidelity” systems. Building usable applications is based on a gradual progression from “lo-fidelity” paper-and-pencil-based prototypes to “hi-fidelity” working systems. Such an approach encourages exploration through low-cost tools and efficient processes until the basic structure of the user interface has been established and is ready to be realized as a working computer system.

3. Adherence to proven principles of design. Through extensive empirical studies, HCI professionals have published several principles to guide the decisions made by designers. These simple and effective principles transcend any single platform and dramatically contribute to usability. The use of design principles is strengthened through the use of usability specifications, quantifiable statements used to formally test the usability of the system (e.g., the application must load within 40 seconds). If you are new to usability, start by focusing on proven principles of design. If you want to formalize the goals you are striving to achieve, require your marketing or product management organizations to include usability specifications in your requirements documents.

4. Usability testing. Each result produced during development is tested — and retested — with users and iteratively refined. Testing provides the critical feedback necessary to ensure designers are meeting user needs. An added benefit of testing is that it involves users throughout the development effort, encouraging them to think of the system as something they own and increasing system acceptance.

UNDERSTANDING USERS

The cornerstone of creating usable applications is based on an intimate understanding of the users, their needs, and the tasks that must be accomplished. One of the very best ways to create this understanding is through a simple user and task analysis. Once these are complete, a function assignment can be performed to clearly identify the distribution of tasks between the user and the system, which leads to the development of the mental model (see Exhibit 1).

[Exhibit 1. User and Task Analysis: User Analysis and Task Analysis feed a Function Assignment, which leads to Mental Model Development.]

PROVIDING APPLICATION SOLUTIONS User Analysis The purpose of a user analysis is to clearly define who the intended users of the system really are, through a series of context-free, open-ended questions. Such questions might include: • Experience: — What is the expertise of the users? Are they experts or novices? — Are they comfortable with computers and GUIs? — Do they perform the task frequently or infrequently? — What problem domain language would the users most easily understand? • Context: — What is the working environment? — Is work done alone or in a group? Is work shared? — Who installs, maintains, and administers the system? — Are there any significant cultural or internationalization issues that must be managed? • Expectations: — How would the users like the system to work? — What features do they want? (If users have difficulty answering this question, propose specific features and ask if the user would like or dislike the specific feature). — How will the current work environment change when the system is introduced? (Designers may have to propose specific changes and ask if these changes would be considered desirable). Asking these questions usually takes no more than a few hours, but the data the answers provide is invaluable to the success of the project. Task Analysis Task analysis seeks to answer two very simple questions: (1) What tasks are the users doing now that the system will augment, change, enhance, modify, or replace? (2) What tasks will the users perform in the new system? The first phase of task analysis is to develop a clear understanding of how the system is currently being used, using the process outlined in Exhibit 2. The last step, that of creating an overall roadmap, is especially important for projects involved with replacing an existing user interface with a redesigned user interface. Common examples of this include replacing aging mainframe systems with modern Web-based applications or creating an entirely new system to work beside an existing system, such as when a voice-based application is added to a call center. 468


[Exhibit 2. Task Analysis: use videotape; ask context-free questions; ask “What is the big picture?”]

The second phase of task analysis, that of describing how the new system will work, is often done through use cases. A use case is a structured prose document that describes the sequence of events between one or more actors and the system as one of the actors (typically the user) attempts to accomplish some task. Chapter 40 discusses use cases in more detail. Function Assignment As users and tasks are identified, the specific functions detailed in the requirements spring to life and are given meaning through use cases. At this stage, it is often appropriate to ask if the identified tasks should be performed by the user, performed by the system automatically on behalf of the user, or initiated by the user but performed by the system. This process is called function assignment, and can be absolutely essential in systems in which the goals are to automate existing business processes. To illustrate, consider an electronic mail system. Most automatically place incoming mail into a specific location, often an “in-box.” As a user of the system, did you explicitly tell the mail system to place mail there? No. It did this on your behalf, usually as part of its default configuration. Fortunately, for those of us who are heavy e-mail users, we can override this default configuration and create a variety of rules that allow us to automatically sort and process incoming e-mails in a variety of creative ways. 469
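To make function assignment concrete, the following hypothetical sketch shows a mail-sorting rule: the user states the rule once (a task initiated by the user), and the system then applies it to every incoming message on the user's behalf. The Message and Rule types are illustrative only and are not drawn from any particular mail client.

```java
import java.util.List;
import java.util.function.Predicate;

// Illustrative types only; real mail clients expose richer message and rule models.
record Message(String sender, String subject) {}
record Rule(Predicate<Message> matches, String targetFolder) {}

class MailSorter {
    private final List<Rule> rules;

    MailSorter(List<Rule> rules) { this.rules = rules; }

    // Task assigned to the system: performed automatically on the user's behalf.
    String assignFolder(Message m) {
        return rules.stream()
                .filter(r -> r.matches().test(m))
                .map(Rule::targetFolder)
                .findFirst()
                .orElse("Inbox");   // default configuration: unmatched mail lands in the in-box
    }

    public static void main(String[] args) {
        // Task initiated by the user: stating the rule once.
        MailSorter sorter = new MailSorter(List.of(
                new Rule(m -> m.sender().endsWith("@example.com"), "Work")));
        System.out.println(sorter.assignFolder(new Message("ann@example.com", "Status report")));
        System.out.println(sorter.assignFolder(new Message("friend@mail.net", "Weekend plans")));
    }
}
```

Running the sketch files the first message under “Work” and lets everything else fall through to the default in-box, mirroring the default configuration described above.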

PROVIDING APPLICATION SOLUTIONS While most systems can benefit from a function assignment, it is an optional step in the overall development effort. Mental Model Development The final step of user and task analysis is to propose various mental models of how the users think about and approach their tasks. Mental models are not documented in any formal manner. Instead, they are informal observations about how designers think users approach their tasks. For example, consider a designer creating a new project planning tool. Through several interviews she has discovered that managers think of dependencies within the project as a web or maze instead of a GANTT or PERT chart. This could provide insight into creative new ways of organizing tasks, displaying information, or providing notification of critical path dependencies. Checklist • We have identified a target population of representative users. • We have created an overall roadmap of current users’ tasks. — If redesigning a current system, we have made screen snapshots of every “screen” and have annotated each screen with a description of the task(s) it supports. — If redesigning a current system, each task has a set of screen snapshots that describe, in detail, how users engage the system to accomplish the task. • We have created a set of high-level use cases that document our understanding of the system as we currently understand it. These are specified without regard to any specific user interface that might be developed over the life of the project. • We have reviewed our list of use cases with our users. • All requirements are covered by the use cases. • No use case introduces a new requirement. • We have summarized our findings with a description of the mental model of the users. • (Optional) We have performed a functional assignment. LO-FIDELITY TO HI-FIDELITY SYSTEMS The common approach to building user interfaces is based on taking the requirements, building a prototype, and then modifying it based on user feedback to produce a really usable system. If you are lucky, and your users like your initial prototype, you are doing OK. More often than not, your users will want to change the initial prototype more than your schedule, motivation, or skills will allow. The result is increased frustration because 470

Usability: Happier Users Mean Greater Profits users are asked to provide feedback into a process that is designed to reject it. A more effective process is to start with lo-fidelity (lo-fi), paper-and-pencil prototypes, testing and developing these with users. Once you have a good sense of where to head, based on a lo-fi design, you can move to a hifidelity, fully functional system secure in the knowledge that you are going to produce a usable result. Lo-Fi Design A lo-fi design is a low-tech description of the proposed user interface. For a GUI, it is a paper-and-pencil prototype. For a VUI (voice-based user interface), it a script that contains prompts and expected responses. The remainder of this section assumes that you are creating a GUI; you will find that you can easily extend these techniques to other user interfaces. Specific objectives of a lo-fi design include: • Clarifying overall application flow • Ensuring that each user interaction contains the appropriate information • Establishing the overall content and layout of the application • Ensuring that each use case and requirement is properly handled The basic activities, as outlined in Exhibit 3, consist of developing a storyboard, creating a lo-fidelity prototype, and beginning the testing process by testing the prototype with representative users. Among the inputs to this activity are the conceptual model (perhaps created through a process like the Rational Unified Process) or an information model (an entity-relationship model for traditional systems or a class diagram for object-oriented systems) and design guidelines. Capturing the System Model: The Role of Metaphor The system model is the model the designer creates to represent the capabilities of the system under development. The system model is analogous to the mental model of the user. As the users use the system to perform tasks, they will modify their current mental model or form a new one based on the terminology and operations they encounter when using the system. Usability is significantly enhanced when the system model supports existing mental models. To illustrate, your mental model of an airport enables you to predict where to find ticket counters and baggage claim areas when you arrive at a new airport. If you were building a system to support flight operations, you would want to organize the system model around such concepts as baggage handing and claim areas. 471


[Exhibit 3. Lo-Fi Prototype Design: the mental model, task analysis, information model, design guidelines, and metaphor evaluation feed a storyboard, which leads to the lo-fi prototype and then to the simulated prototype.]

A metaphor is a communication device that helps us understand one thing in terms of another. Usability is enhanced when the system model is communicated through a metaphor that matches the users’ mental model. For example, the familiar desktop metaphor popularized by the Macintosh and copied in the Windows user interface organizes operations with files and folders as a metaphorical desktop. The system model of files and folders meshes with the mental model through the use of the interface. Another example is data entry dialogs based on their paper counterparts. In the design process, the designer should use the concept of a metaphor as a way of exploring effective ways to communicate the system model and as a means of effectively supporting the users’ mental model. Resist blindly adhering to a metaphor, as this can impede the users’ ability to complete important tasks. For example, although the desktop metaphor enables me to manage my files and folders effectively, there is no effective metaphorical counterpart that supports running critical utility software such as disk defragmentation or hard disk partitioning. A paper form may provide inspiration as a metaphor in a data entry system, but it is inappropriate to restrict the designer of a computer application to the inherent limitations of paper. Storyboarding A storyboard is a way of showing the overall navigation logic and functional purpose of each window in the GUI. It shows how each task identified 472


[Exhibit 4. Simple Storyboard for a Mail System: a SuperMail main window (“This is the main user interface for SuperMail. It presents a menu of operations”); when the user selects Create Message, a Create Message window opens (“User enters text of message and recipients”), linked to an Address Lookup dialog (“User can select addressees from a list”); the notation distinguishes primary windows from modal dialogs.]

in the task analysis and described in the use cases can be accomplished in the system. It also shows primary and secondary window dependencies, and makes explicit the interaction between different dialogs in the user interface. (A primary window is a main application window. A secondary window is a window such as a dialog). The storyboard often expands on the system model through the metaphor and clarifies the designers’ understanding of the users’ mental model. An example of a simple storyboard for a mail system is shown in Exhibit 4. The example shows the name of each window, along with a brief description of the window contents. A solid line means the users’ selection will open a primary window, while a dashed line indicates the opening of a modal dialog. The notation used for storyboards should be as simple as possible. For example, in the earliest phases of system design, using a simple sheet of paper with Post-It notes representing each window is an effective way to organize the system. Storyboards have an additional benefit in that they can show the overall “gestalt” of the system. The author of this chapter has seen storyboards as large as three by six feet, packed with information yet entirely understandable. This storyboard provided the development staff with a powerful means of making certain that the overall application was consistent. The storyboard also enabled the project manager to distribute the detailed window design to specific developers in a sensible way, as the common relationships between windows were easy to identify. The development of the storyboard should be guided by the use cases. Specifically, it should be possible to map each operation described in a use 473

PROVIDING APPLICATION SOLUTIONS case to one or more of the windows displayed in the storyboard. The mapping of specific actions to user interface widgets will come at a later stage in the development process. Lo-Fi Window Design Following the storyboard, the design process proceeds to the development of the initial lo-fi window designs. During this phase, the designer takes paper, pencil, and many erasers and prepares preliminary versions of the most important windows described in the storyboard. A good starting point for selecting candidates for lo-fi design includes any windows associated with important or high-priority use cases. One critical decision point in lo-fi window design is determining the information that should be displayed to the user. The basic rule-of-thumb is to only display the information needed to complete the task. By mapping use cases to the data model, you can usually identify the smallest amount of data required to complete a task. Alternatively, you can simply show your storyboard to your users and add the detail they think is required. The resultant information can be compared with data models to ensure that all data has been captured. Creating a lo-fi window design is fun. Freed from the constraints of the control palette associated with their favorite IDE (integrated development environment), designers tend to concentrate on design and user needs instead of implementation details. The fun part of the design process includes the tools used to create lo-fi designs. The following items should be easily accessible for lo-fi window design: • • • • • • • •

• Scissors
• Glue
• Clear and colored overhead transparencies
• White correction paper
• A computer with a screen capture program and a printer
• Clear tape
• “Whiteout”
• A photocopier

A computer and a printer are included on this list because it is often more practical to print standard widgets and screens, such as corporate-defined standards for buttons or the standard file open dialog provided by the operating system, than try to draw them by hand. Once printed, these can be glued, taped, or otherwise used in the lo-fi design. This does mean that a lo-fi design is a mixture of hand-drawn and computer-generated graphics. In practice, this mixture does not result in any problems.

Usability: Happier Users Mean Greater Profits While the practical use of a computer during lo-fi design is acceptable in an appropriate role, it is important to realize that developers should not be attempting to create lo-fi designs on a computer. Doing so defeats many of the fundamental goals of lo-fi design. Paper-and-pencil designs are more amenable to change and are often created faster than similar designs created in an IDE. More importantly, designers who create their designs in a computer are less likely to change them, primarily because the amount of effort put into a computer design increases the designers’ psychological attachment to the design. This increased attachment means a greater reluctance to change it, which defeats the purpose of the design. A final reason to use lo-fi prototypes is that designers who create their initial designs on a computer tend to worry about how they will make these designs “work.” Specifically, they start worrying about how to connect the user interface to the business logic, or how to format data to meet the needs of the user. The result is premature emphasis on making things work rather than exploring design alternatives. Checklist • We have created a storyboard that details the overall navigation logic of the application. • We have traced each use case through the storyboard. • We have created a data (or object) model that describes the primary sources of information to be displayed to the users and the relationships among these items. • We have transformed our storyboards into a set of lo-fi prototypes. • The lo-fi prototypes were created using paper and pencil. • All information displayed in the lo-fi prototype can be obtained from the entity-relationship or object model or from some other well-known source. DESIGN PRINCIPLES While usability testing is the only way to be certain that the user interface is usable, there are several well-known and validated principles of user interface design that guide the decisions made by good designers. These principles are platform and operating system independent, and they are applicable in almost any environment. Adherence to these principles is becoming increasingly important in the era of Web development, as there are no universal standards for designing Web-based applications. This section presents a consolidated list of design principles that have stood the test of time, drawing heavily from the design principles published by Apple Computer Corporation and user interface researcher Jakob Nielson, a cofounder of the Nielson-Norman consulting company (see Exhibit 5). 475

PROVIDING APPLICATION SOLUTIONS Checklist • Each developer has been given a copy of the design principles. • Each developer has easy access to a copy of the platform standards. Ideally, each developer is given a copy of the platform standards and the time to learn them. • Each error situation has been carefully examined to determine if the error can be removed from the system with more effective engineering. • Management is prepared to properly collect and manage the results of the usability inspection. SIMULATED PROTOTYPING There are many kinds of testing systems in software development: performance, stress, user acceptance, etc. Usability testing refers to testing activities conducted to ensure the usability of the entire system. This includes the user interface and supporting documentation, and in advanced applications can include the help system and even the technical support operation. A specific goal of lo-fi prototyping is to enable the designer to begin usability testing as early as possible in the overall design process through a technique called simulated prototyping. Simulated prototyping means that the operation of the lo-fi system is simulated. Quite literally, a representative user attempts to complete assigned tasks using the prototype with a human playing the role of the computer. Before describing how to conduct a simulated prototype test, let us first explore what results the test should produce. A simulated prototyping session should produce a report that includes the following three items. First, it must be clearly identified for tracking purposes. Second, it must identify all individuals associated with the test. Participants are not identified by name, but by an anonymous tracking number. Referring to the users involved with the test as participants rather than subjects encourages an open and friendly atmosphere and a free-flowing exchange of ideas. The goal is to keep participants as comfortable as possible. Third, and most importantly, it must clearly summarize the results of the test. It is common to see test results concentrating on the negative responses associated with the prototype, but designers should also be looking for the positive responses exhibited by the user. This will enable them to retain the good ideas as the prototype undergoes revision. Unlike a source code review report, the results of the simulated prototype can provide solutions to problems identified during testing. 476

Exhibit 5. Consolidated List of Design Principles


Use concrete metaphors

Concrete metaphors are used to make the application clear and understandable to the user. Use audio, visual, and graphic effects to support the metaphor. Avoid any gratuitous effects; prefer aesthetically sleek interfaces to those that are adorned with useless clutter.

Be consistent

Effective applications are both consistent within themselves and with one another. There are several kinds of consistency that are important: The first is platform consistency, which means the application should adhere to the platform standards on which it was developed. For example, Windows specifies the exact distance between dialog buttons and the edge of the window, and designs should adhere to these standards. Each developer associated with the design of the user interface should be given a copy of the relevant platform standards published by each vendor. This will ensure that the application is platform compliant from the earliest stages of development. The second is application consistency, which means that all of the applications developed within a company should follow the same general model of interaction. This second form of consistency can be harder to achieve as it requires the interaction and communication between all of the development organizations within a company. A third kind of consistency is task consistency. Similar tasks should be performed through similar sequences of actions.

Provide feedback

Let users know what effect their actions have on the system. Common forms of feedback include changing the cursor, displaying a percentdone progress indicator, and dialogs indicating when the system changes state in a significant manner. Make certain the kind of feedback is appropriate for the task.

Prevent errors

Whenever a designer begins to write an error message, he should ask: Can this error be prevented, detected and fixed, or avoided altogether? If the answer to any of these questions is yes, additional engineering effort should be expended to prevent the error.

Provide corrective advice

There are many times when the system cannot prevent an error (e.g., a printer runs out of paper). Good error messages let the user know what the problem is and how to correct it (“The printer is out of paper. Add paper to continue printing”).

Put the user in control

Usable applications minimize the amount of time that they spend controlling user behavior. Let users choose how they perform their tasks whenever possible. If you feel that a certain action might be risky, alert users to this possibility, but do not prevent them from doing it unless absolutely necessary.

Use a simple and natural dialog

Simple means no irrelevant or rarely used information. Natural means an order that matches the task.


Speak the users’ language

Use words and concepts that match in meaning and intent the users’ mental model. Do not use system-specific engineering terms. When presenting information, use an appropriate tone and style. For example, a dialog written for a children’s game would not use the same style as a dialog written for an assembly-line worker.

Minimize user memory load

Do not make users remember things from one action to the next by making certain each screen retains enough information to support the task of the user. (I refer to this as the “scrap of paper” test. If the user ever needs to write a critical piece of information on a piece of paper while completing a task, the system has exceeded memory capacity.)

Provide shortcuts

Shortcuts can help experienced users avoid lengthy dialogs and informational messages that they do not need. Examples of shortcuts include keyboard accelerators in menus and dialogs. More sophisticated examples include command-based searching languages. Novice users can use a simple interface, while experienced users can use the more advanced features afforded by the query language.
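As a hedged illustration of how a few of these principles surface in ordinary GUI code, the sketch below uses a hypothetical “export report” dialog built with Java Swing: the Export button stays disabled until a file name is supplied (prevent errors), an advice label says what is missing and how to fix it (provide corrective advice), and the enabled state of the button gives immediate feedback. The scenario and widget names are invented; the principles, not the toolkit, are the point.

```java
import javax.swing.*;
import javax.swing.event.DocumentEvent;
import javax.swing.event.DocumentListener;

public class ExportDialogSketch {
    public static void main(String[] args) {
        JTextField fileName = new JTextField(20);
        JButton export = new JButton("Export");
        JLabel advice = new JLabel("Enter a file name to enable Export.");
        export.setEnabled(false);                       // prevent the error up front

        fileName.getDocument().addDocumentListener(new DocumentListener() {
            private void update() {
                boolean ready = !fileName.getText().isBlank();
                export.setEnabled(ready);               // feedback: the control reflects system state
                advice.setText(ready ? "Ready to export."
                                     : "Enter a file name to enable Export.");  // corrective advice
            }
            public void insertUpdate(DocumentEvent e)  { update(); }
            public void removeUpdate(DocumentEvent e)  { update(); }
            public void changedUpdate(DocumentEvent e) { update(); }
        });

        JFrame frame = new JFrame("Export Report");
        JPanel panel = new JPanel();
        panel.add(fileName);
        panel.add(export);
        panel.add(advice);
        frame.add(panel);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```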

Conducting the Test A simulated prototype is most effective when conducted by a structured team consisting of between three and five developers. The roles and responsibilities of each developer associated with a simulated prototype are described in Exhibit 6 (developers should rotate roles between successive tests, as playing any single role for too long can be overly demanding). Selecting users for the simulated prototype must be done rather carefully. It may be easy to simply grab the next developer down the hall, or bribe the security guard with a bagel to come and look at the user interface. However, unless the development effort is focused on building a CASE tool or a security monitoring system, the development team has the wrong person. More specifically, users selected for the test must be representative of the target population. If the system is for nurses, test with nurses. If the system is for data entry personnel, test with data entry personnel. Avoid testing with friends or co-workers unless developers are practicing “playing” computer. It is critically important that designers be given the opportunity to practice playing computer. At first, developers try to simulate the operation of the user interface at the same speed as the computer. Once they realize this is impossible, they become skilled at smoothly simulating the operation of the user interface and provide a quite realistic experience for the participant. 478

Exhibit 6. Developer Roles and Responsibilities with a Simulated Prototype

Leader
• Organizes and coordinates the entire testing effort
• Responsible for the overall quality of the review (i.e., a good review of a poor user interface produces a report detailing exactly what is wrong)
• Ensures that the review report is prepared in a timely manner

Greeter
• Greets people, explains test, handles any forms associated with test

Facilitator
• Runs test — the only person allowed to speak
• Performs three essential functions: gives the user instructions; encourages users to “think aloud” during the test so observers can record users’ reactions to the user interface; and makes certain the test is finished on time

Computer
• Simulates the operation of the interface by physically manipulating the objects representing the interface. Thus, the “computer” rearranges windows, presents dialogs, simulates typing, etc.
• Must know application logic

Observer
• Takes notes on 3×5 cards, one note per card

During the simulated prototype, make certain developers have all of their lo-fi prototyping tools easily available. A lot of clear transparency is necessary, as they will place this over screens to simulate input from the user. Moreover, having the tools available means they will be able to make the slightest on-the-fly modifications that can dramatically improve the quality of the user interface, even while the “computer” is running. The test is run by explaining the goals of the test to the participants, preparing them, and having them attempt to accomplish one or more tasks identified in the task analysis. While this is happening, the observer(s) carefully watch the participants for any signs of confusion, misunderstanding, or an inability to complete the requested task. Each such problem is noted on a 3v5 card for further discussion once the test is completed. During the simulated prototype, designers will often want to “help” participants by giving them hints or making suggestions. Do not do this, as it will make the results of the test meaningless. The entire test — preparing to run the test, greeting the users and running the test, and discussing the results — should take about two hours. Thus, with discipline and practice, an experienced team can actually run up to four tests per day. In practice, it is better to plan on running two tests per day so that the development team can make critical modifications to the user interface between tests. In general, three to eight tests give


PROVIDING APPLICATION SOLUTIONS enough data to know if the development effort is ready to proceed to the next phase in the overall development process. Finally, one final word on selecting and managing participants. Remember that they are helping create a better system. The system is being tested, not the participants. Participants must feel completely free to stop the test at any time should they feel any discomfort. Checklist • We have prepared a set of tasks for simulated prototype testing. • A set of participants who match the target user population have been identified. • Any required legal paperwork has been signed by the participants. • Our lo-fi prototype has been reviewed — we think it supports these tasks. • The simulated prototyping team members have practiced their roles. • The “computer” has simulated the operation of the system. • We have responded to the review report and made the necessary corrections to the user interface. We have scheduled a subsequent test of these modifications. HI-FI DESIGN AND TESTING Once simulated prototyping has validated the lo-fi prototype, the design process moves into the last stage before implementation: the creation of the hi-fidelity (hi-fi) prototype. This is an optional step in the overall process and can be safely skipped in many circumstances. Skipping this step results in taking the lo-fi prototype and simply implementing it without further testing or feedback from users. Motivations for creating and testing a hi-fi prototype include ensuring that there is sufficient screen real estate to display the information identified in the lo-fi prototype, checking detailed art or graphics files, and making certain that the constraints of the delivery platform do not invalidate prior design decisions. Hi-fi prototypes are required when the design team has created a customized component, such as a special widget to represent a unique object. These must be tested to ensure they will work as desired. A hi-fi prototype allows developers to enhance presentation and aesthetics through the use of fonts, color, grouping, and whitespace. Doing this most effectively requires experience with graphic design, a rich topic beyond the scope of this chapter. However, graphic design details substantially contribute to overall feelings of aesthetic enjoyment and satisfaction, and should be considered an essential activity in the overall development effort. Unless the development team has solid graphic design experience, 480

the choice of fonts should be kept simple, using predominantly black on white text and avoiding the use of graphics as adornments. If you are taking the time to build a hi-fi prototype before final implementation, test it with a few users. Doing so will help clarify issues that are difficult or impossible to test in a lo-fi design, such as when a button should be enabled or disabled based on prior user input, response times for key system operations, or the operation of custom controls. While the results of a hi-fi prototype test are the same as a lo-fi test, the process is substantially different. First, the test environment is different. It is typically more formal, with tests conducted within a usability lab, a special room with the equipment necessary to conduct and run the test. Second, the nature of the tasks being tested means that the structure of the test is different. For example, lo-fi prototypes are most effective at determining if the overall design created by the development team will be effective. Specifically, the lo-fi test should have helped determine the overall structure of the user interface: the arrangement and content of menus, windows, and the core interactions between them. When the lo-fi testing is complete, the conceptual structure of the interface should be well-understood and agreed upon with the users. The hi-fi test, on the other hand, should be organized around testing one or more concrete performance variables. The real managerial impact of hi-fi testing is twofold. First, there is the question of finding the right individuals to conduct the test. Does the team have access to individuals who can properly conduct a hi-fi test? Most development teams do not. While most developers can quickly and easily learn how to run an effective lo-fi test, conducting a properly structured hi-fi test requires significantly more training. The second, and far more important, question is this: What is going to be done with the results of the test? Like lo-fi test results, the results of a hi-fi test must be evaluated to determine what, if any, modifications are needed in the user interface. The problem is that modifying a hi-fi prototype takes a substantial amount of design and coding, and, as discussed earlier, the likelihood of substantially changing prior design decisions decreases as the effort invested in creating them increases. Do not test a hi-fi prototype if you are not willing to change it.
Checklist
• Our hi-fi test is measuring a specific performance variable.
• We have identified a qualified human factors specialist for hi-fi testing.
• We have prepared precise definitions of test requirements.
• We have secured the use of an appropriately equipped usability lab.

PROVIDING APPLICATION SOLUTIONS CONCLUSION The first main conclusion of this chapter deals with process. Creating usable systems is much more than following a series of checklists or arbitrary tasks. Ultimately, the process of creating usable systems is based on working to understand your users and performing a number of activities to meet their needs. These activities, such as user and task analysis, lo-fi design, and simulated prototyping, must all be performed, keeping in mind the primary objective of creating a usable system. The second main conclusion of this chapter concerns motivation. There are several motivations for creating usable systems, the most important of which must be the goal to create satisfied customers. Satisfied users are satisfied customers, however you might define customer, and satisfied customers are the foundation of a profitable enterprise. Given the correlation between usability and profitability, it is imperative that senior management takes usability seriously.


Chapter 39

UML: The Good, the Bad, and the Ugly John Erickson Keng Siau

OBJECT ORIENTATION AND THE EMERGENCE OF UML Introduction The proliferation and development of information systems has proceeded at a pace amazing to even those intimately involved in the creation of such systems. It appears, however, that software engineering has not kept pace with the advances in hardware and general technological capabilities. In this maelstrom of technological change, systems development has traditionally followed the general ADCT (Analyze, Design, Code, Test) rubric, and utilized such specific methodologies as the Waterfall method, the Spiral method, the System Life Cycle (alternatively known as the System Development Life Cycle, or SDLC), Prototyping, Rapid Application Development (RAD), Joint Application Development (JAD), end-user development, outsourcing in various forms, or buying predesigned software from vendors (e.g., SAP, J.D. Edwards, Oracle, PeopleSoft, Baan). In general, systems and software development methods do not require that developers adhere to a specific approach to building systems; and while this may be beneficial in that it allows developers the freedom to choose a method that they are most comfortable with and knowledgeable about, such an open-ended approach can constrain the system in unexpected ways. For example, systems developed using one of the above triedand-not-so-true approaches (judging from the relatively high 66 to 75 percent failure rate of systems development projects) generally do not provide even close to all of the user-required functionalities in the completed system. Sieber et al.1 stated that an ERP implementation provided only 60 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


PROVIDING APPLICATION SOLUTIONS to 80 percent of the functionality specified in the requirements, and that was “merely” an implementation of a supposedly predeveloped application package. Thus, a different approach to systems development, one that provides close integration between analysis, design, and coding, would appear to be necessary. This chapter explores the role of the Unified Modeling Language (UML) as a modeling language that enables such an approach. This chapter starts by exploring the concept of object orientation, including object-oriented systems analysis and design, the idea of modeling and modeling languages, and the history of UML. It continues by covering the basic UML constructs and examining UML from a practitioner perspective. The chapter ends with a discussion of the future of UML and integrative closing comments. Object Orientation Over the past 15 to 20 years, object-oriented programming languages have emerged as the approach that many developers prefer to use during the Coding part of the ADCT cycle. However, in most cases, the Analysis and Design steps have continued to proceed in the traditional style. This has often created tension because traditional analysis and design are processoriented instead of being object-oriented. Object-oriented systems analysis and design (OOSAD) methods were developed to close the gap between the different stages, the first methods appearing in the 1980s. By the early 1990s, a virtual explosion in the number of OOSAD approaches began to flood the new paradigmatic environment. Between 1989 and 1994, the number of OO development methods grew from around 10 to more than 50.2 Two of these modeling languages are of particular interest for the purposes of this chapter: Booch and Jacobson’s OOSE (Object-Oriented Software Engineering), and Rumbaugh’s OMT (Object Modeling Technique).2 A partial listing of methods and languages is shown in Exhibit 1. The Emergence of UML Prominent developers of different object-oriented modeling approaches joined forces to create UML, which was originally based on the two distinct OO modeling languages mentioned above: OOSE and OMT. Development began in 1994 and continued through 1996, culminating in the January 1997 release of UML version 1.0. 2 The Object Management Group (OMG) adopted UML 1.1 as a standard modeling language in November 1997. Version 1.4 is the most current release, and UML 2.0 is currently under development. 484

Exhibit 1. Sample Methods and Languages
• Bailin
• Berard
• Booch
• Coad-Yourdon
• Colbert
• Embley
• Firesmith
• Gibson
• Hood
• Jacobson
• Martin-Odell
• Rumbaugh
• Schlaer-Mellor
• Seidewitz
• UML
• Wirfs-Brock

CURRENT UML MODELS AND EXTENSIBILITY MECHANISMS Modeling UML, as its name implies, is really all about creating models of software systems. Models are an abstraction of reality, meaning that we cannot model the complete reality, simply because of the complexity that such models would entail. Without abstraction, models would consume far more resources than any benefit gained from their construction. For the purposes of this chapter, a model constitutes a view into the system. UML originally proposed a set of nine distinct modeling techniques representing nine different models or views of the system. The techniques can be separated into structural (static) and behavioral (dynamic) views of the system. UML 1.4, the latest version of the modeling language, introduced three additional diagram types for model management. Structural Diagrams. Class diagrams, object diagrams, component diagrams, and deployment diagrams comprise the static models of UML. Static models represent snapshots of the system at a given point or points in time, and do not relate information about how the system achieved the condition or state that it is in at each snapshot.

Class diagrams (see an example in Exhibit 2) represent the basis of the OO paradigm to many adherents and depict class models. Class diagrams specify the system from both an analysis and design perspective. They depict what the system can do (analysis), and provide a blueprint showing how the system will be built (design).3 Class diagrams are self-describing 485


Exhibit 2. Class Diagram [Figure: a class model for a request-for-quotation (RFQ) application. It shows an Interface class and the RFQ, Addendum, Site, SiteLine, Contractor, and SubContractor classes, each with attributes (e.g., RFQ number, customerID, description, date, total_amount, bonding_requirement, overhead, tax) and operations (e.g., addSite(), updateFromAddendum(), calculateQuotation()), together with their associations and multiplicities.]

and include a listing of the attributes, behaviors, and responsibilities of the system classes. Properly detailed class diagrams can be directly translated into physical (program code) form. In addition, correctly developed class diagrams can guide the software engineering process, as well as provide detailed system documentation.4 Object models and diagrams represent specific occurrences or instances of class diagrams, and as such are generally seen as more concrete than the more abstract class diagrams. Component diagrams depict the different parts of the software that constitute a system. This would include the interfaces of and between the components as well as their interrelationships. Ambler3 and Booch, Rumbaugh, and Jacobson2 defined component diagrams as class diagrams at a more abstract level. Deployment diagrams can also be seen as a special case of class diagrams. In this case, the diagram models how the runtime processing units are connected and work together.
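To make the claim that a detailed class diagram translates almost directly into code a little more concrete, the sketch below shows how one class from Exhibit 2 might look in Java. The class, attribute, and operation names (RFQ, Site, Addendum, addSite, updateFromAddendum, calculateQuotation) are taken from the exhibit; the field types, the use of a List for the Contains association, and the body of calculateQuotation are assumptions added purely for illustration and are not the chapter's own implementation.

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

// Illustrative sketch only: names come from the classes summarized in Exhibit 2,
// while the types and method bodies are assumptions.
public class RFQ {
    private String number;
    private String customerID;
    private String description;
    private Date date;
    private double totalAmount;
    private double bondingRequirement;
    private double overhead;   // assumed to be a fraction, e.g., 0.10 for 10 percent
    private double tax;        // assumed to be a fraction as well

    // The one-to-many "Contains" association with Site becomes a collection field.
    private final List<Site> sites = new ArrayList<>();

    public void addSite(Site site) {
        sites.add(site);
    }

    public void updateFromAddendum(Addendum addendum) {
        // Merge the addendum's changes into this RFQ; the diagram does not specify how.
    }

    public double calculateQuotation() {
        // Operations in the class diagram become methods; a simple roll-up is assumed.
        double total = 0.0;
        for (Site site : sites) {
            total += site.getTotalSiteAmount();
        }
        return total * (1.0 + overhead + tax);
    }
}

// Minimal stubs so the sketch is self-contained.
class Site {
    private double totalSiteAmount;
    public double getTotalSiteAmount() { return totalSiteAmount; }
}

class Addendum {
    private String rfqNumber;
    private Date date;
    private String description;
}

Each attribute in the diagram becomes a field and each operation becomes a method, which is why a well-specified class diagram can also serve as detailed system documentation.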


Exhibit 3. Use Case Diagram [Figure: Customer, contractor, and provider actors associated with use cases such as customer registration, send RFQ, analyze RFQ, calculate labor, issue addendum, issue sub-RFQ to provider, analyze sub-RFQ, send question to contractor, calculate total, and send quotation to customer.]

The primary difference between component and deployment diagrams is that component diagrams focus on software units while deployment diagrams depict the hardware arrangement for the proposed system. Behavioral Diagrams. Use case diagrams, sequence diagrams, collaboration diagrams, activity diagrams, and state chart diagrams make up the dynamic models of UML. In contrast to static diagrams, dynamic diagrams in UML are intended to depict the behavior of the system as it transitions between states, interacts with users or internal classes and objects, and moves through the various activities that it is designed to accomplish.

While class models and diagrams represent the basis of OO as previously discussed, use case models (see Exhibit 3) and diagrams portray the system from the perspective of an end user, and represent tasks that the system and users must execute in the performance of their jobs.5 Use case 487

Exhibit 4. Sequence Diagram [Figure: the time-ordered messages exchanged between the contractor and the RFQ object while the contractor analyzes an RFQ, including getting questions or options, formulating and sending questions, and returning answers.]

models and the resulting use case diagrams consist of actors (those persons or systems outside the system of interest who need to interact with the system under development), use cases, and relationships among the actors and use cases. Booch, Rumbaugh, and Jacobson2 proposed that developers begin the Analysis process with use cases. By that they suggest that developers should begin the Analysis process by interviewing end users, perusing the basic legacy system documentation, etc., and creating from those interviews and documents the use cases that drive the class model development as well as the other models of the system. Sequence and collaboration diagrams portray and validate in detail the logical steps of use cases.3 Sequence diagrams (see Exhibit 4) depict the time ordering of messages between objects in the system and, as such, include lifelines for the objects involved in the sequence as well as the focus of control at points in time.2 Sequence diagrams are isomorphic with collaboration diagrams, meaning that one can be converted into the other 488


Exhibit 5. Collaboration Diagram [Figure: numbered messages among the Customer, the Contractor/Quotation manager, and the Addendum quotation objects, from receiving the RFQ and looking up the customer, through determining bid/no bid and requesting clarification, to receiving and incorporating an addendum.]

with no loss of information. Collaboration diagrams (see Exhibit 5) focus more on the relationships between the objects involved in the sequence or collaboration. Collaboration diagrams focus on the connection between the objects, the path, and the sequence number, which indicates the time relevance of the involved action.2 Activity diagrams (see Exhibit 6) model the flow of control through activities in a system and, as such, are really just flowcharts. In addition, activity diagrams are special instances of statechart diagrams. Statechart diagrams (see Exhibit 7) model state machines. State machines model the transition between states within an object, and the signals or events that trigger or elicit the change in state from one value to another.2 For example, a change in air temperature triggers a thermostat to 489

Exhibit 6. Activity Diagram [Figure: the flow of activities in producing a quotation, from sending the RFQ, conducting negotiation, and sending an addendum, through calculating labor and material costs with suppliers, vendors, and subcontractors, to calculating the total cost and sending the quotation to the customer.]

Exhibit 7. Statechart Diagram [Figure: the states a contractor's bid passes through, from RFQ receipt and determining bid/no bid, through clarification requests, addendum handling, and the determination of labor, material, subcontract, and bond content and pricing, to completing the bid, sending it to the customer, and receiving an award or unsuccessful notice.]

activate a heating or cooling system that regulates the temperature in a room or building. A rise in air temperature in this case would be sensed by the thermostat and would cause the cooling system to change states from inactive (or idle) to active and begin the cooling process. Once the ideal temperature is reached, the thermostat would sense that and trigger a state change in the cooling system back to inactive. Model Management Diagrams. As mentioned, UML version 1.4 introduced three new diagram types in the Model Management category — entitled packages, subsystems, and models — and all three types are used as mechanisms for categorizing model elements. A detailed discussion of model management diagrams is beyond the scope of this chapter.
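Before turning to extensibility, the thermostat example above can be made concrete with a small code sketch. This is not part of any system discussed in the chapter; it simply shows, under assumed names and threshold logic, the kind of two-state machine that a statechart diagram describes.

// A minimal sketch of the thermostat example: the cooling system is a two-state
// machine whose transitions are triggered by temperature readings.
// The class names and threshold logic are assumptions made for illustration.
public class CoolingSystem {

    enum State { INACTIVE, ACTIVE }

    private State state = State.INACTIVE;
    private final double idealTemperature;

    public CoolingSystem(double idealTemperature) {
        this.idealTemperature = idealTemperature;
    }

    // The thermostat calls this method each time it senses the air temperature.
    public void onTemperatureReading(double temperature) {
        switch (state) {
            case INACTIVE:
                if (temperature > idealTemperature) {
                    state = State.ACTIVE;    // rise in temperature: idle to cooling
                }
                break;
            case ACTIVE:
                if (temperature <= idealTemperature) {
                    state = State.INACTIVE;  // ideal temperature reached: cooling to idle
                }
                break;
        }
    }

    public State currentState() {
        return state;
    }
}

A statechart diagram for this class would contain exactly the two states and the two temperature-triggered transitions between them, which is the kind of behavior that static class diagrams cannot express.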

Extensibility Mechanisms UML is intended to be a fully expressive modeling language. As such, it possesses a formal grammar, vocabulary, and syntax for expressing the necessary details of the system models through the nine diagramming techniques. Although UML represents a complete and formal modeling development language, there is no realistic way that it can suffice for all models across all systems. UML provides three ways to extend the language so that it can encompass situations that the standard language is not equipped to handle. While UML provides for four commonly used mechanisms 2 — Specifications, Adornments, Common Divisions, and Extensibility — we concern ourselves only with Extensibility for the purposes of this chapter. Stereotypes. In a basic sense, stereotypes simply add new “words” to UML’s vocabulary. Stereotypes are generally derivations from existing structures already found within UML, but yet are different enough to be specific to a particular context. Booch, Rumbaugh, and Jacobson2 used the example of modeling exceptions in C++ and Java as classes, with special types of attributes and behaviors. Tagged Values. Providing information regarding version numbers or releases is an example of tagged values in use.2 Tagged values can be added to any UML building block to provide clarity to developers and users during and after the development cycle. Constraints. Constraints are simply that — constraints. UML allows

developers to add constraints to a model that modify or extend its rules, specifying the conditions and triggers under which those rules apply (or do not apply), including cases that would otherwise be exceptions to the rules.
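The stereotype example cited above, modeling C++ and Java exceptions as classes, can be illustrated with a short sketch. The class below is the sort of element a modeler might mark with an <<exception>> stereotype on a class diagram; the class name and its attributes are invented here for illustration and do not come from the chapter.

// A plain Java class that a modeler might tag with an <<exception>> stereotype:
// it extends Exception and carries a few diagnostic attributes.
// The name and fields are assumptions for illustration.
public class QuotationLimitExceededException extends Exception {

    private final String rfqNumber;
    private final double requestedAmount;

    public QuotationLimitExceededException(String rfqNumber, double requestedAmount) {
        super("Quotation limit exceeded for RFQ " + rfqNumber);
        this.rfqNumber = rfqNumber;
        this.requestedAmount = requestedAmount;
    }

    public String getRfqNumber() { return rfqNumber; }
    public double getRequestedAmount() { return requestedAmount; }
}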

UML: The Good, the Bad, and the Ugly AN EVALUATION OF UML UML: The Good The characteristics of UML described above have helped it gain broad acceptance and support among the developer community. The widespread adoption and use of UML as a primary, if not the standard, modeling language for OO systems development efforts can be seen as at least indirect evidence of the usability of the language in analysis, design, and implementation tasks. UML presents a standard way of modeling OO systems that enhances systems development efforts, and as you will see below, future enhancements to UML will provide even greater standardization and interoperability. UML also provides a vital and much-needed communication connection between users and designers by incorporating use case modeling and diagramming in its repertoire.6 UML can be used with a variety of development methodologies; it is not shackled to only one approach. This can only broaden its appeal and overall usefulness to developers in the industry. In a nutshell, UML has provided some vital and much-needed stability in the modeling arena, and the software development community as a whole can only benefit from that.7 Selic, Ramakers, and Kobryn8 propose that as information systems become evermore complex, modeling software for constructing understandable representations those systems will become correspondingly more important for developers in such complex and quickly changing environments. As such, UML is positioned to provide modeling support for developers; and if the pending UML 2.0 lives up to expectations, it should be able to provide viable systems modeling tools and support for at least several years into the future. UML: The Bad Use cases are more process than object-oriented. Thus, the use case-centric approach has been criticized because it takes a process-oriented rather than object-oriented view of a system. This is a point of controversy among researchers and developers alike. Dobing and Parsons9 went so far as to propose that, because use cases and resulting use case diagrams are process oriented, their role in object-oriented systems development should be questioned, and possibly removed from use in OO development methods for that very reason. However, because nearly all businesses are process-oriented, or at least are seen that way by most end users, it might be desirable, and even necessary, for developers to capture essential end-user requirements by 493

PROVIDING APPLICATION SOLUTIONS means of their process-based descriptions of the tasks that they and the system must perform. In addition, UML has been criticized for being too complex for mere mortals to understand or learn to use in a reasonable amount of time. Some research has been done with regard to complexity. Rossi and Brinkkemper’s study10 established a set of metrics for measuring diagram complexity, while Siau and Cao7 applied the metrics to UML and other modeling techniques. Their results indicated that although none of the individual UML diagrams are more complex than those used in other techniques, UML on a whole is much more complex. In addition, with version 2.0 pending, UML is not likely to become less complex in the future. Recent movement toward a component-based architecture has also been mentioned as a gradually increasing limitation of UML.11,12 Kobryn11 goes on to detail the extent of the problem involving patches in UML (current version) that attempt support for J2EE’s Enterprise JavaBeans and .NET’s COM+. UML 2.0 will incorporate built-in rather than patched extensions for at least these developments in the OO paradigm. However, Duddy13 takes a more pessimistic view of both current and future versions of UML. He believes that although UML 2.0 will provide coverage for development tools and paradigms currently in use, there is no way that it can provide support for emerging application development approaches or tools, such as application servers, loosely coupled messaging, and Web services. To be fair, however, it is also somewhat unrealistic to expect any tool to be a panacea for whatever methodologies, approaches, or paradigms capture the attention of developers at any given point in time. Finally, with the appearance of aspect-oriented programming, it is entirely possible that the entire OO approach could be supplanted with a new paradigm. If that were to happen, it might become problematic in that UML is constrained by its limited extensibility mechanisms. That is, we must ask whether or not UML is sufficiently robust to adapt or be adapted to a radically new paradigm. UML: The Ugly The complexity of UML, as discussed above, means one of two things: companies considering the use of UML will either (1) have to provide extensive (i.e., expensive) training if they plan to develop in-house, or (2) have to hire UML-trained (i.e., also expensive) consultants to carry out their systems development efforts. Either way, this indicates that UML-based systems development projects will probably not get any less expensive in the future. However, the same criticism could be leveled at most other modeling tools as well. 494

UML: The Good, the Bad, and the Ugly At least some projects developed with UML do not appear to fully utilize the modeling language, and this begs the following question: if most systems developers do not use many of the features and capabilities of UML, is it worthwhile to maintain in the language those capabilities and features that are rarely used? Only research into the issue will be able to answer that question. For more information regarding this issue, see Siau and Halpin.14 At the recent (November 2002) OOPSLA Conference in Seattle, Siau participated as a problem submitter and domain expert for the DesignFest component of the conference. This gave him an opportunity to informally interview a number of the participants about their current uses of UML. Some developers make extensive use of UML and maintain an archive and version control of models as they develop systems, while others simply use UML informally, as a drawing tool. Still other developers make little or no use of UML in their development efforts. None of this should be surprising, because no modeling tool will be adopted by everyone. However, the middle group, those who use UML informally or change the diagramming techniques as they feel necessary in the pursuit of their projects, raises some interesting questions. Do they not fully use the capabilities of UML because the tools are too expensive, because fully using UML is too complex, or for any of a variety of other reasons? While these questions and suppositions are based on anecdotal evidence, at least one highlights an issue with UML. Do developers feel that UML is too complex to use? If developers perceive the modeling language and the associated tools as difficult to use, it is possible that more effort is expended in understanding and using the tools than in the primary goal — that is, developing the system. Because of these concerns, evaluating and improving UML from the perspective of its usability and perceived complexity are very important. THE FUTURE AND NEW USES OF UML UML Version 2.0 The development of UML version 2.0 began in the fall of 2000, and continues to date. The upgrade has been separated into four areas, each proceeding more or less independently, including UML Infrastructure, UML Superstructure, Object Constraint Language, and UML Diagram Interchange. The OMG (Object Management Group), as the standards “keeper of the flame” for things OO, has solicited proposals from five different groups, of which three have been recently negotiating a single-version, collaborative effort for the final submitted revision proposal. 495

PROVIDING APPLICATION SOLUTIONS New Uses of UML Proponents of eXtreme Modeling and eXtreme Programming have begun to utilize the capabilities of UML in those approaches to systems development, while XMI (XML Metadata Interchange) allows developers to transfer a (standardized) UML model from one development tool into others, as necessary for system development efforts. A number of development tools on the market will allow developers to read source code from legacy systems into the tool and create reverse-engineered UML diagrams.15 Other tools will generate code or test suites if the UML modeling process has been thorough enough to allow that. Following that line of thought, Mellor has proposed16 that UML be restructured into a form that will render the models created early on in a systems development effort executable to program code as the development proceeds, and the models become more expressive of the actual system requirements. Others17 counter that while UML in such advanced format may be highly desirable, there appears to be little chance of it actually happening soon in light of the many competing proposals related to UML 2.0. Mellor’s 2U proposal16 includes the idea that the coming restructure (UML 2.0) incorporate a “small well-defined executable and translatable kernel” that would render at least some of the models executable. Duddy’s proposal13 is that UML be upgraded to include extension mechanisms that support the Common Warehouse Metamodel, Enterprise Distributed Object Computing standards, as well as Enterprise JavaBeans and Common Object Request Broker Architecture (CORBA). Inherent in Duddy’s proposal is that UML be extensible for a “family of languages,” including support of the Model Driven Architecture (MDA) that the OMG has adopted. CONCLUSION Neither UML nor any other modeling language, development method, or methodology has proven to be a panacea for the Analysis, Design, and Implementation of information systems. As suggested by Brooks,18 this is not surprising because the inherent, essential characteristics of software development make it a fundamentally complex activity. UML is not perfect but it integrates many important software engineering practices that are important enhancements to systems development, and it does so in a way that, if not clear to everyone, is at least enlightening to developers. Finally, looking back at the past 40 years of systems development chaos and woes, it appears that UML can be seen, problems notwithstanding, as one of the most important innovations in systems development since the advent of the structured approaches.


UML: The Good, the Bad, and the Ugly References 1. Sieber, T., Siau, K., Nah, F., and Sieber, M., “SAP Implementation at the University of Nebraska,” Journal of Information Technology Cases and Applications, 2(1), 41–72, 2000. 2. Booch, G., Rumbaugh, J., and Jacobson, I., 1999, The Unified Modeling Language User Guide, Addison-Wesley, Reading, MA, 1999. 3. Ambler, S., “How the UML Models Fit Together,” http://www.sdmagazine.com, 2000. 4. Lago, P., “Rendering Distributed Systems in UML,” in K. Siau and T. Halpin, Eds., Unified Modeling Language: Systems Analysis, Design, and Development Issues, Idea Group Publishing, Hershey, PA, 2000. 5. Pooley, R. and Stevens, P., Using UML: Software Engineering with Objects and Components, Addison-Wesley Longman Ltd., Harlow, England, 1999. 6. Fowler, M., 2000, “Why Use the UML?,” http://www.sdmagazine.com, 2000. 7. Siau, K. and Cao, Q., “Unified Modeling Language? A Complexity Analysis,” Journal of Database Management, 12(1), 26–34, January–March 2001. 8. Selic, B., Ramackers, G., and Kobryn, C., “Evolution, Not Revolution,” Communications of the ACM, 45(11), 70–72, November 2002. 9. Dobing, B. and Parsons, J., “Understanding the Role of Use Cases in UML: A Review and Research Agenda,” Journal of Database Management, 11(4), 28–36, 2000. 10. Rossi, M. and Brinkkemper, S., “Complexity Metrics for Systems Development Methods and Techniques,” Information Systems, 21(2), 209–227, 1996. 11. Kobryn, C., “What to Expect from UML 2.0,” SD Times, Accessed April 3, 2003, http://www.sdtimes.com. 12. Zhao, L. and Siau, K., “Component-Based Development Using UML,” Communications of the AIS, 9, 207–222, 2002. 13. Duddy, K., “UML2 Must Enable a Family of Languages,” Communications of the ACM, Vol. 45(11), 73–75, November 2002. 14. Siau, K. and Halpin, T., Unified Modeling Language: Systems Analysis, Design, and Development Issues, Idea Group Publishing, Hershey, PA, April 2001. 15. OMG (Object Management Group) Web site, http://www.omg.org. 16. Mellor, S., “Make Models Be Assets,” Communications of the ACM, 45(11), 76–78, November 2002. 17. Miller, J., “What UML Should Be,” Communications of the ACM, 45(11), 67–69, November 2002. 18. Brooks, F., “No Silver Bullet: Essence and Accidents of Software Engineering,” IEEE Computer, 20(4), 10–19, 1987.



Chapter 40

Use Case Modeling Donald R. Chand

A 1999 Standish Group research study1 showed that Corporate America spends more than $275 billion each year on approximately 200,000 application software development projects. The same study reported that, in 1998, only 26 percent of these projects were successful in terms of the project being completed on time and within budget with all features and functions originally specified, and 46 percent were cancelled before completion. This study also suggested that the critical factors for the failures of most projects are an incorrect understanding and improper modeling of systems requirements, insufficient user involvement, and weak or inexperienced project management. The use case approach appears to be one answer to these major causes for application software projects failures. In the use case approach, the user requirements are captured as use cases, each of which encapsulates a well-defined functionality or feature. This encapsulation allows the development team to better manage and track requirements and, more significantly, plan the development process. For example, the functionality specified in a use case can be validated with the user, and the use case implementation itself can be validated with respect to the use case specification. Furthermore, because a ranking of the use cases by the users captures the perceived importance of one set of requirements over another to the users, the use case implementation can be prioritized in terms of the functionality that the user needs most. On the other hand, because these use cases can also be ranked by the project manager in terms of their complexity and risk, management can balance the complexity, risk, and user importance to determine the order of implementation of the requirements. Because use cases also provide the backbone for developing the systems test plan and for performing acceptance testing, the use case approach is more effective in involving the users throughout the project. The use case approach was invented by Jacobsen,2 and it is becoming the preferred approach for requirements modeling and systems develop0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


PROVIDING APPLICATION SOLUTIONS ment. There are many reasons for the popularity of use cases for requirements modeling. First, use cases are built on very simple and understandable primitives that enable effective communication with the user but are at the same time precise enough for the analyst to correctly capture requirements and communicate them to the developers. Second, use cases are part of the Unified Modeling Language (UML), which is widely adopted as the industry standard for object-oriented systems engineering. Third, use cases provide a natural and integrated input to object modeling, interface specification, and object testing. Fourth, the evolution of Web-based, network-centric information technology has dramatically altered the functionality and complexity of traditional business applications, and compared to data flow diagrams, use cases are more effective in capturing the complex and dynamic requirements of Web-enabled applications. The objective of this chapter is to introduce the use case modeling primitives and bring out the issues that make use case modeling a rich and deep area of study. The approach taken here is illustrative. That is, a simple video rental application is used to introduce the key concepts and basic issues in use case modeling. SAMPLE APPLICATION The system requirements of a traditional video rental application focus on the operational needs, such as maintaining a database of members, processing rental and returns, maintaining an inventory of videos, searching the video inventory, browsing video and movie catalogs, and producing a variety of daily transactional reports, weekly and monthly summary reports, and exception-based management reports. The data flow technology and structured analysis and design methods have proven very effective in addressing such traditional pre-Web business requirements. A Webenabled vision of the traditional video rental application dramatically changes the system requirements. For example, from the user perspective, some of the requirements of the Web-enabled rental application include the following: • Apply for or renew membership. • Browse and search the catalog using different categories. • Select and reserve video titles for rent and home delivery, mail-order, or in-store pickup. • Return rented videos in store, or by delivery-pickup or mail order. • Check the status of a user account. • See previews of movies. • Receive and exercise coupons. This user perspective in a Web-enabled video rental application has an impact on the business requirements of the video rental system. In addi500


Exhibit 1. UML Notation [Figure: a use case drawn as a named ellipse, an actor drawn as a named stick figure, and an association drawn as an undirected line connecting the two.]

tion to maintaining the membership database and the video inventory and processing rental and return transactions, a Web-based application system must enable the business to sell advertisements; provide value-added functionality such as special deals and coupons; maintain links to trivia, statistics, and related sites; maintain preview clips; and extend the daily status and exceptions reports with advanced usage and demographics reports. Thus, a Web-enabled video rental application is more complex than the typical traditional video rental application in terms of functionality and user interactivity. The use case approach appears to be more effective than the data flow approach in addressing the user-interactivity complexity of modern Web-enabled business applications. CORE ELEMENTS OF USE CASE TECHNOLOGY The actor, use case, and association are the three key modeling elements of the use case approach. Actors are a coherent set of roles that the users of the system play when interacting with the use cases in the system. A use case is a sequence of activities that a system can perform while interacting with the actors. An association models the participation of an actor in a use case. In UML notation, an actor is modeled as a named stick man, a use case is depicted as a named ellipse, and an undirected line connecting an actor to a use case represents the communication between an instance of that actor and instances of that use case. These three elements, shown in Exhibit 1, are used to build a use case model that captures the system behavior as it appears to outside users. We use the Web-enabled video rental application to illustrate how these elements are used in requirements modeling but will bind the scope to the following functions:
• Membership creation and maintenance
• Logging in and logging out
• Searching for DVDs and videos
• Selecting items to rent


Exhibit 2. Sample Actors for Video Rental Application [Figure: the Browser, Member, Billing Clerk, Shipping Clerk, and Inventory System actors.]

• Generating shipment details • Generating billing details • Handling rental returns That is, some of the other important functions of Web-enabled video rental application, such as customer profiling and analysis, inventory management, credit card processing, purchasing, promotions, and video clips management, are outside the scope of this modeling illustration. Because use case models are composed of actors and use cases, the first step in requirements modeling is to identify and define the actors and use cases. One can identify the actors by asking questions such as who uses the system, who provides information to the system, who gets information from the system, what other systems provide information to this system, what other systems use this system, etc. Sample actors of our Web-based video rental application are shown in Exhibit 2. Because a use case describes the things that actors want the systems to do, you can go through all the actors and identify use cases for each by asking what functions the actor wants from the system, or what actors create or update the information maintained by the system. For example, if we assume that the Browser actor would like to become a member of the video rental entity, then Create Member is a potential use case. Similarly, if we analyze what the Member actor would want from the system, such as to log in, search the catalog, add to or delete items from the rental order, submit the rental order, check his or her account, and log out, we discover the Login, Browse Catalog, Select Item, Delete Item, Check Account, and Submit Rental Order use cases. Finally, because in our vision of the system, the Billing Clerk actor receives the bill details after the member submits a rental order and the Shipping Clerk actor receives the shipping details after the Billing Clerk confirms payment and changes the status of the rental order from pending to accepted order, we can invent an Accept Order use 502


Exhibit 3. Sample Use Cases for Video Rental Application [Figure: the Become Member, Log In, Log Out, Browse Catalog, Select Item, Delete Item, Check Account, Submit Rental Order, and Accept Order use cases.]

Exhibit 4. Use Case Diagram [Figure: the Browser, Member, Billing Clerk, Shipping Clerk, and Inventory System actors and their associations with the Become Member, Log In, Log Out, Browse Catalog, Select Item, Delete Item, Check Account, Submit Rental Order, and Accept Order use cases.]

case. Although this analysis is simplified, it illustrates the basic idea of how use case selection is based on an understanding of the business model and user requirements. The use cases for our simplified video rental application are shown in Exhibit 3. A high-level use case model is generated when the actors and use cases are documented in a use case diagram (see Exhibit 4). The use case diagram defines the boundary of the system by showing what is outside the system and what is inside the system. In this sense, the use case diagram is similar to the context diagram in the data flow-driven, structured analysis and design. 503

PROVIDING APPLICATION SOLUTIONS USE CASE DESCRIPTION The selection of actors and use cases is an iterative process. During the inception phase of an iterative systems development life cycle, such as the Rational Unified Process (RUP), the initial identification of actors and use cases, documented in a use case diagram, are used for risk analysis, systems size estimation, and project proposal. During the elaboration phase, the details of use cases are specified. All use case modeling approaches specify a basic template for describing a use case. The most common parts of a use case template are: Name, Goal, Description, Pre-condition, Postcondition, and Basic, Alternate, and Exceptional Flow of events. The Basic Flow area contains the most common system–actors interactions resulting from the primary scenarios. The Alternate Flow area captures the system–actors interactions of the secondary scenarios, and the Exceptional Flow area documents the procedures for handling errors. The Pre-condition section defines the state of the system that must be true before the use case can be executed; and the Post-condition section specifies the system state after the use case is executed. These parts are illustrated in Exhibit 5 for the Become Member use case.
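One practical payoff of writing use cases to this template is that each part maps naturally onto testing: the pre-condition becomes test set-up, the basic flow becomes the test body, and the post-condition becomes the assertions. The sketch below shows that mapping for the Become Member use case using JUnit. The tiny in-memory VideoRentalSystem class is invented purely so the sketch compiles; it is not the video rental system being modeled in this chapter.

import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

// Sketch only: the template parts of the Become Member use case drive the structure
// of the test, while the stand-in "system" below is an assumption made for illustration.
public class BecomeMemberUseCaseTest {

    /** Minimal stand-in for the membership portion of the system under test. */
    static class VideoRentalSystem {
        private final Set<String> members = new HashSet<>();
        private final Set<String> passwordsEmailed = new HashSet<>();
        private boolean guestLoggedIn;

        void logInAsGuest() { guestLoggedIn = true; }

        void submitMembershipForm(String email) {
            members.add(email);           // step 7: add the member record
            passwordsEmailed.add(email);  // steps 6 and 8: create and e-mail a password
        }

        boolean isMember(String email) { return members.contains(email); }
        boolean passwordEmailedTo(String email) { return passwordsEmailed.contains(email); }
    }

    private VideoRentalSystem system;

    @Before
    public void preCondition() {
        // Pre-condition: the system is up and the Browser is logged in as a guest.
        system = new VideoRentalSystem();
        system.logInAsGuest();
    }

    @Test
    public void basicFlow() {
        // Basic flow: the Browser completes and submits the membership application.
        system.submitMembershipForm("pat@example.com");

        // Post-condition: the Browser is now a Member and a password has been e-mailed.
        assertTrue(system.isMember("pat@example.com"));
        assertTrue(system.passwordEmailedTo("pat@example.com"));
    }
}

Because the use case itself names the pre-condition, flows, and post-condition, the same text that the user validates can later be traced directly into the acceptance test plan.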

OTHER USE CASE MODELING CONSTRUCTS Because the flow of events in a use case can grow in size and complexity, the <<include>> and <<extend>> constructs are available to capture the relationships among use cases and to control this size and complexity. For example, in our use case model of the Web-based video rental application, we show a direct association between the Browser actor and the Become Member use case. However, in the description of the Become Member use case, the pre-condition is that the Browser actor has logged in. This suggests that the use case diagram in Exhibit 4 is not consistent with the description of the Become Member use case. The reason for this is that when we were developing the use case description, the customer revealed the requirement that the Browser be allowed to review the catalog, watch the video clips, and read the ads and promotions. To get these privileges, the Browser needs to register as a guest. This new understanding of the customer requirements can be modeled as a new Browser Log-in use case that is included in the Become Member use case and the Browse Catalog use case. Exhibit 6 shows how these relationships are depicted in the use case diagram. The <<include>> relation is a factoring device in the sense that a common flow of events repeated in various use cases is factored out and maintained in a separate use case. The <<extend>> relationship is different because it captures event flows that extend the current requirements of a use case. For example, suppose that after the video rental system has been built, the

Exhibit 5. Become Member Use Case

Use Case Name: Become Member
Actors: Browser
Goal: Enroll the browser as a new member.
Description: The Browser will be asked to complete a membership form. After the Browser submits the application form, the system will validate it and then add the Browser to the membership file and generate a password that is e-mailed to the Browser.
Pre-condition: The system is up and the Browser is logged in as a guest.
Post-condition: The Member password is e-mailed to the Browser/Member; the Member is logged.

Basic Flow
1. (Actor) This use case begins when the Browser clicks the membership button.
2. (System) Display the membership form.
3. (Actor) Browser completes and submits the application.
4. (System) Check for errors.
5. (System) Check the membership database for prior membership.
6. (System) Create a password.
7. (System) Add new or updated member record to the membership database.
8. (System) Send an e-mail to the actor with the password.
The use case ends.

Alternate Flow (prior membership handling)
5.1. Update membership record.
5.2. Inform the browser.
Continue from step 6 of Basic Flow.

Exception Flow (errors in membership application)
4.1. Identify errors.
4.2. Return errors to Browser.
4.3. Browser corrects errors.
Continue from step 4 of Basic Flow.

client now wishes to add a new capability to view video clips of the movie while browsing the catalog. This new requirement, shown in Exhibit 7, is 505


Exhibit 6. The <<include>> Relationship [Figure: the Become Member and Browse Catalog use cases each <<include>> the Browser Log-in use case, with the Browser and Inventory System as the associated actors.]

Exhibit 7. The <<extend>> Relationship [Figure: the View Video Clip use case <<extend>>s the Browse Catalog use case, which <<include>>s the Browser Log-in use case, with the Browser and Inventory System as the associated actors.]

captured by the View Video Clip use case extending the Browse Catalog use case. USE CASE MODELING ISSUES Although use case technology is an important addition to the requirements modeling toolset, using use cases does not guarantee good requirements. Use case modeling suffers from the same challenges as other approaches for requirements modeling, namely the understanding of the problem domain and correctly modeling the problem domain at the right level of abstraction. This section illustrates the types of problems and issues one encounters in use case modeling. Identification and Selection of Actors The concept of an actor appears to be quite straightforward because it is essentially a role that a business entity plays as it interacts with the system. However, in practice, the process of identifying and selecting actors can become confusing. For example, in a traditional video rental system, it 506

Use Case Modeling is the point-of-sale clerk who interacts with the system on behalf of the customer; and in the Web-enabled video rental system, the customer interacts with the system. Therefore, if you are building a video rental system that will be Web-enabled in the future, you face the issue of whether the customer is the actor, the point-of-sale clerk is the actor, or both the customer and the point-of-sale clerk are actors in this application. To handle such issues, the use case approach has been extended. Armour and Miller3 introduce the concept of a use case model of the business. They suggest creating a use case model for the business and using the business model to generate a use case model for the system. To distinguish the actors in the two models, they introduce the terminology of “business actors” and “system actors.” Therefore, in Armour and Miller’s use case modeling approach, the customer is a business actor and the point-of-sale clerk is a system actor. This opens up the issue of the relationship between the business-level use case model and the system-level use case model, and between business actors and system actors. It should be noted that, because use case models are used for identifying objects and building object models, a system model of the traditional video rental application without its associated business model will model the customer as a data object as opposed to a business object. This modeling decision will impact the modifiability of the system. For example, when you move to a Web-enabled video rental system where the customer has direct interaction with the system, the customer object’s methods such as log-in, log-out, search, preview, add video, delete video, etc. are different from the data object methods, namely get and save, when the customer is modeled as a data object. Cockburn4 also addresses this issue of business-level use cases modeling and system-level use case modeling. He introduces a stakeholders approach for use case modeling that begins by creating a conceptual use case model of the business and then refining it into a use case model of the system. In addition to searching business and systems actors, there is the issue of relationships between and among the actors. For example, to help identify the inheritance relationship among objects, the use case approach offers a generalization construct for the actors. In the Web-enabled video rental example, the Browser actor and the Member actor have “Search Catalog” and “Preview Video Clips” as common activities. However, the Member actor has other specialized activities such as “Rent Video” and “Check Account.” This means the Browser actor is a generalization of the Customer actor. The notational construct to capture this generalization relationship among actors is shown in Exhibit 8. Actor generalization adds 507


Exhibit 8. The Business-Level Use Case Model [Figure: the Customer, Member, and Video Store actors interacting with business-level use cases such as Browse Catalog and Preview Video Clips, Rent and Return Videos, Charge and Credit Payment, and Maintain Catalog and Video Clips, with Member shown as a specialization of Customer.]

richness and another level of complexity to the process of identifying, selecting, and defining actors. Finally, in traditional data flow-based structured analysis, there was no way to capture the concept of temporal events on the context diagram. In use case modeling, temporal events are modeled as actors, thereby expanding the definition and notion of an actor. Identification and Selection of Use Cases Because the functionality of the system is captured in the use cases, the essence of this modeling approach is the identification and selection of use cases. Although the actors are essential in use case modeling, their primary role is to help shape the functionality of the system by identifying use cases. It turns out that the art and complexity of use case modeling is in matching the use cases to the “right level” of requirements abstraction. The business-level use case modeling and system-level use case modeling, as discussed above, are one way of addressing this issue of the “right level of abstraction.” The more modern approaches for use case modeling3,4 begin by first creating a use case model of the business. A business-level use case model of a video rental business is shown in Exhibit 8. The generalization relationship between the Member and Customer allows the Member to do what the Customer can do. In this business model, the use cases are described in a very general way and only their names, purpose, and high-level descriptions are specified. The system-level use cases are discovered and invented by focusing on each of the interactions specified in the business-level model. For example, 508


Exhibit 9. The Selection of System-Level Use Cases [Figure: the Rent and Return Videos business use case linked by <<include>> relationships to the Select Item, Delete Item, Select Delivery Option, and Return Videos system-level use cases, with the Member as the interacting actor.]

the analysis of the interaction of the Member actor with the “Rent and Return Videos” use case may lead to the use cases shown in Exhibit 9. This is not a functional decomposition, but rather a way of discovering use cases. At the system level, the Video Store actor can be modeled as a generalization of the Inventory Clerk, the Shipping Clerk, and the Billing Clerk identified in the business use case model. Documentation of Use Cases The Unified Modeling Language (UML) is very rich and provides the vocabulary, notation, and rules to build and document all known software views, varying from the high-level architectural view that is composed of packages to the detailed logic view at the activity or state-change levels. The availability of precise notation and tools has led many software professionals to capture and document the description of the flow of events of use cases in UML activity diagrams instead of the textual interaction shown in Exhibit 5. The reason for this is the belief that these tools, like the activity diagram, are more precise in capturing the interactions between the actors and the system. Cockburn4 reminds us that the primary role of use case models is to capture the user requirements, and that textual documentation, as illustrated here, is easier for the user to relate to and is general enough to capture the essence of any complex interaction if the exceptions are properly identified and separately documented. SUMMARY AND CONCLUSION This chapter introduced, explained, and illustrated the key modeling constructs of use case modeling. It brought out the use case modeling issues 509

PROVIDING APPLICATION SOLUTIONS encountered in requirements modeling and how they are handled. The use case approach for modeling the application requirements begins with creating a use case model of the business and employing that business-level use case model to discover the system-level use case actors and use cases. Use cases have become an integral part of software development and software project management. For example, use cases are employed to estimate the size of the project, do risk analysis, prioritize the functionality provided in each development cycle, plan the schedule, develop test plans, monitor the schedule, and manage requirement changes. The use case model also provides a basis for designers’ and architects’ work, aids documentation writers in generating user guides, and helps maintainers understand how existing versions of the system work. Even in agile development methodologies, such as eXtreme Programming, where documentation is minimal, the development process is based on user stories that are essentially informal use cases. Those IS organizations that are lagging and have not yet adopted the use case approach for their systems analysis and design activities need to review their priorities and plan for implementing use cases in their organizations. As illustrated in this chapter, use cases are easy to relate to but their correct use requires a deep understanding of system modeling issues. Readers who are new to use case modeling will find the book by Schneider and Winters5 a very good starting point for their study. It will provide the use case modeling background needed to appreciate and work through the more advanced modeling issues discussed and elaborated in Armour and Miller3 and Cockburn.4 References 1. “CHAOS: A Recipe for Success,” Whitepaper, The Standish Group International, Inc., Copyright 1999. 2. Jacobson, I., Christenson, M., Jonsson, P., and Overgaard, G., Object-Oriented Software Engineering: A Use Case Driven Approach, Addison-Wesley, Reading, MA, 1992. 3. Armour, F. and Miller, G., Advanced Use Case Modeling, Addison-Wesley, Reading, MA, 2001. 4. Cockburn, A., Writing Effective Use Cases, Addison-Wesley, Reading, MA, 2001. 5. Schneider, G. and Winters, J., Applying Use Cases — A Practical Approach, Addison-Wesley, Reading, MA, 2000.


Chapter 41

Extreme Programming and Agile Software Development Methodologies Lowell Lindstrom Ron Jeffries

As a stakeholder of a software project, how does the following sound to you? You can have releases as often as you like. The small number of defects is unprecedented. All of the features in the system are the most valuable ones to your business. At anytime, you have access to complete, accurate information as the status of any feature and of the quality of the system as a whole. The team developing your project works in an energized space with constant communication about the project. You are not dependent on any one or even two programmers for the continued success of the project. If your needs change, the development team welcomes the change of direction. As a developer of a software project, how does the following sound to you? No one estimates your tasks but you, period! You always have access to a customer to clarify details about the features you are implementing. You are free (and required) to clean up the code whenever necessary. You complete a project every two weeks. You can work in any part of the system that you wish. You can pick who will help you on any given task. You are not required to constantly work long hours. Does this sound too good to be true? Teams are achieving these advantages using a relatively new set of software methodologies. Collectively, 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


PROVIDING APPLICATION SOLUTIONS they are referred to as Agile Software Development Methodologies. The most pervasive is Extreme Programming. This chapter introduces these exciting, popular, yet controversial new approaches to software development. BACKGROUND Trends in Software Development The last half-century has seen a dizzying progression of technical advancement in the areas of computer, software, and communications technology. With each advance came rapid changes in the way society works and lives. The impact of technology is increasingly pervasive. Even as the current economic downturn limits capital investment, innovators and entrepreneurs are pushing the limits in the areas of biotechnology and nanotechnology. Supporting all these advances is software running on some kind of computer. From the spreadsheets to anti-lock brakes to phones to toys, logic that was once implemented in mechanics, circuitry, or pencil and paper is now a set of instructions controlling one or many computers. These computers communicate with each other over networks ignorant to the limits of geography and even wires. The notion of a program running on a computer is largely a memory as the emergence of distributed computing models, most recently Web services, allows many different computers to participate in “running” a program or system to yield a given output. In this explosive environment, software professionals have the challenge to deliver a seemingly infinite backlog of software projects, while keeping abreast of the latest advances. The results are debatable at best. Survey after survey continues to confirm that most software projects fail against some measure of success. Most software developers have many more stories of failure than success. Given the friction and finger-pointing that accompanies the end of a failed project, it is difficult to research the causes of software failures. However, typically, projects fail for one or more of the following reasons: • • • • • • • • • 512

• Requirements that are not clearly communicated
• Requirements that do not solve the business problem
• Requirements that change prior to the completion of the project
• Software (code) that has not been tested
• Software that has not been tested as the user will use it
• Software developed such that it is difficult to modify
• Software that is used for functions for which it was not intended
• Projects not staffed with the resources required in the project plan
• Schedule and scope commitments are made prior to fully understanding the requirements or the technical risks

Despite these delivery problems, strong demand seems to hold steady. Reports still suggest a shortage of software professionals and college graduates, despite the implosion of the Internet industry slowing the growth of demand and the emergence of the offshore programming market increasing supply.

Improvement Remains Elusive

Efforts to improve the success rate of software projects have existed since the first bug was detected. Recently, these efforts have focused on four distinct parts of the software development process: requirements gathering, designing and developing the software, testing the results, and overall project management. Formal requirements definition and analysis addressed the problem of requirements that were incomplete or did not reflect the needs of the customer. Formal design before implementation addressed the goals of reuse, consistency of operation, and reducing rework. Testing caught defects before they reached the users. Project management addressed the problem of coordinating the efforts of multi-department teams. These areas of focus follow a logical approach to process improvement: improve the quality of the inputs (requirements), improve the quality of the output (designing and developing, project management), and improve the detection and elimination of defects prior to shipping (testing).

Methods and tools focusing on these four distinct areas of software development have proliferated. For some projects, these efforts have been effective. For others, they yielded silos of responsibility, with poor communication between the groups and distributed ownership and accountability. If a project is late, it is easy for the programmers to blame the requirements gatherers, and vice versa. If the users detect defects, finger-pointing ensues between developers and testers. Project management techniques try to better coordinate the activities of the multiple groups involved in delivering the project, but add yet another silo and area of accountability. The most popular project management techniques focus on developing a plan and sticking to that plan. This improves coordination but reduces the ability of the project to adapt to new information regarding the requirements or the implementation details.

The Emergence of Agile Methods

The additional process steps, roles, and artifacts helped many teams to enjoy higher success rates and more satisfied customers. Unfortunately, many projects failed attempting to use the same techniques. Some projects got lost in the documents and never implemented any code, missing the window of opportunity for the software. Others did not leave enough time at the end for implementation and testing and delivered systems inconsistent

with the documents and designs on which most of the project time was spent. At the same time, numerous projects that did not follow methods with binders of documents, detailed designs, and project plans were very successful. Many experienced programmers were having great success without all these extra steps. The determining factor of project success seemed more and more to be the people on the project, not the technology or the methods that were being used. After all, people end up writing the software at some point. To some, the developers who did not embrace the new methodologies appeared to be undisciplined and indifferent to quality, despite their successes at delivering quality software that people wanted to use.

A few people started to author papers about these disciplined, yet lighter approaches to software development. They called them Extreme Programming, SCRUM, Crystal, Adaptive, etc. Different authors emphasized different aspects of software development. Some focused on approaches to planning and requirements; some focused on ways to write software that could be changed more easily; and some focused on the people interactions that allow software developers to more easily adapt to their customers' changing needs. These various efforts created a focal point for a community that furthered the set of practices that succeed without many of the activities and artifacts required by more defined methodologies. In the fall of 1999, Extreme Programming Explained: Embrace Change1 was published and the trend had found its catalyst. In early 2001, the innovators who were creating the different agile methodologies held a retreat and wrote the "Agile Manifesto for Software Development."2 By the spring of 2002, Computerworld.com ran the following headline: "More than two-thirds of all corporate IT organizations will use some form of 'agile' software development process within 18 months, Giga Information Group predicted this week at its application development conference here."3

Agile Methodologies

At the retreat in early 2001, a number of leaders of the agile software development movement discussed their approaches and explored the commonalities and differences. What emerged was the Agile Manifesto for Software Development. The manifesto (see Exhibit 1) articulates core values and principles that guide agile methodologies.

EXTREME PROGRAMMING

Extreme Programming (XP) is the most widely used agile methodology. XP shares the values espoused by the Agile Manifesto for Software Development

Exhibit 1. Agile Manifesto

We are uncovering better ways of developing software by doing it and helping others to do it. Through this work we have come to value:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

Principles behind the Agile Manifesto

We follow these principles:
• Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.
• Welcome changing requirements, even late in development. Agile processes harness change for the customer's competitive advantage.
• Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to shorter time scales.
• Business people and developers must work together daily throughout the project.
• Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
• The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
• Working software is the primary measure of progress.
• Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
• Continuous attention to technical excellence and good design enhances agility.
• Simplicity — the art of maximizing the amount of work not done — is essential.
• The best architectures, requirements, and designs emerge from self-organizing teams.
• At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.

Agile Methods

The number of methods that claim to align with the Agile Manifesto will continue to grow with the popularity of the agile software methodologies. The initial methodologies include:*

• Extreme Programming
• SCRUM
• Crystal
• Feature Driven Development
• Lean Development
• Adaptive Software Development
• DSDM

*Jim Highsmith, Agile Software Development Ecosystems, Addison-Wesley, 2002, provides a comparison of these methodologies.


but goes further to specify a simple set of practices. Whereas many popular methodologies try to answer the question "What are all of the practices I might ever need on a software project?," XP simply asks, "What is the simplest set of practices I could possibly need and what do I need to do to limit my needs to those practices?" The significance of this difference cannot be overstated. The most frequent critique of XP is that it is too simple to work beyond a narrow set of project criteria. Yet, the set of known successes with XP continues to stretch the breadth of projects applicable for XP. It would seem that the parameters that we use to determine what methods are appropriate for what project are still inadequate.

To many, XP is a set of 12 interdependent software development practices. Used together, these practices have had much success, initially with small teams, working on projects with high degrees of change. However, the more one works with XP, the more it is apparent that the practices do not capture the essence of XP. As with the heavier methods, some teams have great success with the XP practices, some less so. Some larger teams have greater success than smaller ones. Some teams with legacy code have success; others do not. There is something more than just the practices that enables teams to succeed with XP. This extra attribute of XP is the XP Values.

What Is Extreme Programming?

Extreme Programming is a discipline of software development based on values of simplicity, communication, feedback, and courage. It works by bringing the whole team together in the presence of simple practices, with enough feedback to enable the team to see where they are and to tune the practices to their unique situation.

In XP, every contributor to the project is a member of the "Whole Team," a single business/development/testing team that handles all aspects of the development. Central to the team is the "Customer," one or more business representatives who sit with the team and work with them daily. XP teams use a simple form of planning and tracking to decide what to do next and to predict when any desired feature set will be delivered. Focused on business value, the team produces the software in a series of small, fully integrated releases that pass all the tests that the Customer has defined. The core XP practices for the above are called Whole Team, Planning Game, Small Releases, and Acceptance Tests. There are specific recommendations for all of these, which are briefly discussed here and as the chapter progresses.

Extreme Programmers work together in pairs and as a group, with simple design and obsessively tested code, improving the design continually to keep it always just right for the current needs. The core XP practices here are Pair Programming, Simple Design, Test-Driven Development, and Design Improvement.

The XP team keeps the system integrated and running all the time. The programmers write all production code in pairs, and all work together all the time. They code in a consistent style so that everyone can understand and improve all the code as needed. The additional practices here are called Continuous Integration, Collective Code Ownership, and Coding Standard.

The XP team shares a common and simple picture of what the system looks like. Everyone works at a pace that can be sustained indefinitely. These practices are called Metaphor and Sustainable Pace.

XP Values

The XP Values are Communication, Simplicity, Feedback, and Courage.

The essence [of XP] truly is simple. Be together with your customer and fellow programmers, and talk to each other. Use simple design and programming practices, and simple methods of planning, tracking, and reporting. Test your program and your practices, using feedback to steer the project. Working together this way gives the team courage.4

These values guide our actions on the project. The practices leverage these values to remove complexity from the process. The impact of the XP Values is significant and unique. XP remains the only methodology that is explicit in both its values and its practices. This combination gives specific guidance on what to do on a project (the practices), but also on how to react (defer to the values) when the practices do not seem to be working or are not sufficient. Most methods are specific on practices, some specify principles, but few combine both.5 For example, CMMI describes Key Practice Areas (KPAs) but does not articulate a set of values or principles. RUP provides guiding principles, such as Develop Iteratively, but does not include values that give guidance beyond the software development practices. How these values are used to guide the team in its use of the practices is described later in the section "Fitting XP to Your Project." See Exhibit 2 for a comparison of several well-known methodologies.

Organization

On a project using XP, there are two explicit roles or teams defined: the Customer and the Programmer. In keeping with the value of simplicity, most of the XP literature describes the customer as a single person who can represent the requirements, acceptance criteria, and business value for the project.

Exhibit 2. Comparison of Methodologies

Methodology     Values    Principles    Practices
CMMI            No        No            Yes (KPAs)
SA/SD           No        No            Yes
RUP             No        Yes           Yes
Agile           Yes       Yes           No
XP              Yes       Yes           Yes

Note: CMMI = Capability Maturity Model Integration, Software Engineering Institute; SA/SD = Structured Analysis/Structured Design; RUP = Rational Unified Process, Rational Corporation.

In practice, it is a team of people that communicates with one voice with the Programming Team. As such, this role is also referred to as the Customer Team. This chapter uses the term "Customer" to describe the role, whether performed by an individual or a team. The Programmer is a member of the Programming Team that implements the XP Customer Team's requirements. Again, the convention will be to use the term "Programmer" to describe an individual or the team.

On all but the smallest projects, there will also be a Management Team that allocates resources for the teams, manages the alignment of the project to the goals of the business, and removes any obstacles impeding the team's progress. Extreme Programming does not specify management practices. XP attempts to simplify management by empowering the Customer and Programmer to make most of the decisions regarding the project. Often, XP teams are described as self-managing. As projects grow in size and complexity, more management is typically required to coordinate the efforts of different teams. Many of the other emerging agile methodologies are focusing more attention on management practices, such as Scrum,6 Lean Development,7 and Extreme Project Management.8

The Rhythm of an XP Project

An XP project proceeds in iterations of two weeks in length. Each iteration delivers fully developed and tested software that meets the most valuable small set of the full project's requirements. Exhibit 3 shows the primary activities of the Customer and Programmer during the initial iterations of a project. The project proceeds in a steady rhythm of delivering more functionality. The Customer determines at what point in time the full system can be released and deployed.

Core Practices

There are 12 core practices that define XP. Teams new to XP should focus on using and developing skills with these practices. Over time, as the team matures in its use of XP, it will continue to check its proficiency with these practices, but will also tailor the practices to the project needs.

Exhibit 3. The Rhythm of an XP Project

Week 0 (preparing for the first iteration):
  Customer Team: exploration, story writing, alignment, customer tests
  Both teams: agree on process
  Programming Team: spikes, development environment preparations

Weeks 1 and 2 (Iteration 1) and Weeks 3 and 4 (Iteration 2):
  Both teams: daily stand-up, sit together, discuss detailed requirements, run customer tests continuously
  Customer Team: prep for next iteration, manage release schedule, communicate with end users/stakeholders, customer tests
  Programming Team: tasks, continuous builds, some spikes, estimation of new stories for later iterations

Week 'n': the rhythm continues…

XP teams are encouraged to use feedback from their project to adapt, add, and eliminate practices as needed. A number of other practices are popular on XP teams and some of these are described later. The practices can be described as a cycle of activities (see Exhibit 4). The inner circle describes the tight cycle of the Programmers. The outer loop describes the planning cycle that occurs between the Customers and Programmers. The middle loop shows practices that help the team communicate and coordinate the delivery of quality software.

Whole Team. All the contributors to an XP project sit together as members of one team. This team must include a business representative — the Customer — who provides the requirements, sets the priorities, and steers the project. It is best if the Customer or one of her aides is a real end user who knows the domain and what is needed. The team will, of course, have programmers. The team will typically include testers, who help the Customer define the customer acceptance tests. Analysts may serve as helpers to the Customer, helping to define the requirements. There is commonly a coach who helps the team stay on track and facilitates the process. There may be a manager, providing resources, handling external communication, and coordinating activities. None of these roles is necessarily the exclusive property of just one individual.


Exhibit 4. XP Practices and the Circle of Life
[Figure: nested circles of the XP practices, showing Whole Team (Sit Together), Planning Game, Small Releases, Customer Tests (Acceptance Tests), Collective Ownership, Continuous Integration, Coding Standard, Metaphor, Sustainable Pace, Pair Programming, Test-First Design, Refactoring, Simple Design, and Design Improvements.]

Everyone on an XP team contributes in any way that he or she can. The best teams have no specialists, only general contributors with special skills.

Planning Game. XP planning addresses two key questions in software development: predicting what will be accomplished by the due date, and determining what to do next. The emphasis is on steering the project — which is quite straightforward — rather than on exact prediction of what will be needed and how long it will take — which is quite difficult. There are two key planning steps in XP:

1. Release planning is a practice where the Customer presents the desired features to the programmers, and the programmers estimate their difficulty. With the cost estimates in hand, and with knowledge of the importance of the features, the Customer lays out a plan for the project. Initial release plans are necessarily imprecise; neither the priorities nor the estimates are truly solid, and until the team begins to work, we will not know just how fast they will go. Even the first release plan is accurate enough for decision making, however, and XP teams revise the release plan regularly.

2. Iteration planning is the practice whereby the team is given direction every couple of weeks. XP teams build software in two-week "iterations," delivering running, useful software at the end of each iteration. During Iteration Planning, the Customer presents the features desired for the next two weeks. The programmers break them down into tasks and estimate their cost (at a finer level of detail than in

Release Planning). Based on the amount of work accomplished in the previous iteration, the team signs up for what will be undertaken in the current iteration.

These planning steps are very simple yet they provide very good information and excellent steering control in the hands of the Customer. Every couple of weeks, the amount of progress is entirely visible. There is no "90 percent done" in XP: a feature story was completed, or it was not. This focus on visibility results in a nice little paradox. On the one hand, with so much visibility, the Customer is in a position to cancel the project if progress is not sufficient. On the other hand, progress is so visible, and the ability to decide what will be done next is so complete, that XP projects tend to deliver more of what is needed, with less pressure and stress.

Customer Tests. As part of presenting each desired feature, the XP Customer defines one or more automated acceptance tests to show that the feature is working. The team builds these tests and uses them to prove to themselves, and to the customers, that the feature is implemented correctly. Automation is important because in the press of time, manual tests are skipped. That is like turning off your lights when the night gets darkest.
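To make the idea of an automated customer test concrete, the following is a minimal sketch in Java using a JUnit-style test. The fare rule, the Fare class, and its method are hypothetical illustrations invented for this example; they are not part of XP itself or of any particular project.

import junit.framework.TestCase;

// Customer-defined rule (illustrative only): a ride of up to 10 miles costs a
// flat $5.00; each additional mile adds $0.40. The Customer states the rule,
// and the team automates it so it can run with every build.
public class FareAcceptanceTest extends TestCase {

    public void testFlatFareUpToTenMiles() {
        assertEquals(5.00, Fare.forMiles(8), 0.001);
    }

    public void testPerMileChargeBeyondTenMiles() {
        // 10 flat miles plus 5 additional miles at $0.40 each = $7.00
        assertEquals(7.00, Fare.forMiles(15), 0.001);
    }
}

// Minimal production code that satisfies the customer tests above.
class Fare {
    static double forMiles(int miles) {
        double fare = 5.00;
        if (miles > 10) {
            fare += 0.40 * (miles - 10);
        }
        return fare;
    }
}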

The best XP teams treat their customer tests the same way they do programmer tests: once the test runs, the team keeps it running correctly thereafter. This means that the system only improves, always notching forward, and never backsliding. Small Releases. XP teams practice small releases in two important ways. First, the team releases running, tested software, delivering business value chosen by the Customer, with every iteration. The Customer can use this software for any purpose, either for evaluation or even for release to end users (which is highly recommended). The most important aspect is that the software is visible, and given to the customer at the end of every iteration. This keeps everything open and tangible. Second, XP teams also release software to their end users frequently. XP Web projects release as often as daily, in-house projects monthly or more frequently. Even shrinkwrapped products are shipped as often as quarterly.

It might seem impossible to create good versions this often but XP teams are doing it all the time. See the section on "Continuous Integration" for more on this, and note that these frequent releases are kept reliable by XP's obsession with testing, as described in the sections on "Customer Tests" and "Test-Driven Development."

Simple Design. XP teams build software to a simple design. They start simple, and through programmer testing and design improvement, they keep it that way. An XP team keeps the design exactly suited for the current

functionality of the system. There is no wasted motion, and the software is always ready for what is next. Design in XP is neither a one-time thing nor an up-front thing, but it is an all-the-time thing. There are design steps in release planning and iteration planning, plus teams engage in quick design sessions and design revisions through refactoring, throughout the course of the entire project. In an incremental, iterative process like Extreme Programming, good design is essential.

Pair Programming. In XP, two programmers, sitting side by side at the same machine, build all production software. This practice ensures that all production code is reviewed by at least one other programmer, resulting in better design, better testing, and better code.

It may seem inefficient to have two programmers doing "one programmer's job," but the reverse is true. Research on pair programming shows that pairing produces better code in about the same time as programmers working singly. That is right: two heads really are better than one! It does take some practice to do well, and you need to do it well for a few weeks to see the results. Most programmers who learn pair programming prefer it, so we highly recommend it to all teams.

Pairing, in addition to providing better code and tests, also serves to communicate knowledge throughout the team. As pairs switch, everyone gets the benefits of everyone's specialized knowledge. Programmers learn, their skills improve, and they become more valuable to the team and to the company. Pairing, even on its own outside of XP, is a big win for everyone.

Test-Driven Development. XP is obsessed with feedback; and in software development, good feedback requires good testing. XP teams practice "test-driven development," working in very short cycles of adding a test, then making it work. Almost effortlessly, teams produce code with nearly 100 percent test coverage, which is a great step forward in most shops. (If your programmers are already doing even more sophisticated testing, more power to you. Keep it up, it can only help.)

It is not enough to write tests; you have to run them. Here, too, XP is extreme. These "programmer tests," or "unit tests," are all collected together, and every time any programmer releases any code to the repository (and pairs typically release twice a day or more), every single one of the programmer tests must run correctly. One hundred percent, all the time! This means that programmers get immediate feedback on how they are doing. Additionally, these tests provide invaluable support as the software design is improved.
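As an illustration of the test-first rhythm described above, here is a minimal sketch, again assuming a JUnit-style framework; the ShoppingCart class and its behavior are hypothetical and exist only to show the add-a-test-then-make-it-pass cycle.

import junit.framework.TestCase;

// Step 1: write a small test that fails because the behavior does not exist yet.
public class ShoppingCartTest extends TestCase {

    public void testTotalOfTwoItems() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(3.50);
        cart.add(1.25);
        assertEquals(4.75, cart.total(), 0.001);
    }
}

// Step 2: write the simplest code that makes this test (and all earlier tests) pass.
class ShoppingCart {
    private double total = 0.0;

    void add(double price) {
        total += price;
    }

    double total() {
        return total;
    }
}

The pair then cleans up anything untidy and repeats the cycle with the next small test, keeping the entire suite passing at every step.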

Design Improvement. XP focuses on delivering business value in every iteration. To accomplish this over the course of the whole project, the software must be well designed. The alternative would be to slow down and ultimately get stuck. So, XP uses a process of continuous design improvement called "refactoring."9
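Before the practice is elaborated below, a minimal before-and-after sketch may help show what a refactoring actually changes; the report-formatting classes are hypothetical, and the observable behavior is identical on both sides.

// Before: the same header-formatting logic appears in two methods.
class ReportBefore {
    String dailyReport(String title) {
        return "=== " + title.toUpperCase() + " ===\n" + "(daily figures)";
    }

    String weeklyReport(String title) {
        return "=== " + title.toUpperCase() + " ===\n" + "(weekly figures)";
    }
}

// After: the duplication is pulled into one place. Behavior is unchanged, but
// the next change to header formatting now happens in exactly one method.
class ReportAfter {
    String dailyReport(String title) {
        return header(title) + "(daily figures)";
    }

    String weeklyReport(String title) {
        return header(title) + "(weekly figures)";
    }

    private String header(String title) {
        return "=== " + title.toUpperCase() + " ===\n";
    }
}

The programmer tests run before and after such a change, which is what makes these small, steady improvements safe.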

The refactoring process focuses on the removal of duplication (a sure sign of poor design), and on increasing the “cohesion” of the code while lowering the “coupling.” High cohesion and low coupling have been recognized as the hallmarks of well-designed code for at least 30 years.10 The result is that XP teams start with a good, simple design, and always have a good, simple design for the software. This lets them sustain their development speed and, in fact, generally increase speed as the project goes forward. Refactoring is, of course, strongly supported by comprehensive testing that ensures that as the design evolves, nothing is broken. Thus, the customer tests and programmer tests are a critical enabling factor. The XP practices support each other: they are stronger together than separately. Continuous Integration. XP teams keep the system fully integrated at all times. We say that daily builds are for wimps; XP teams build multiple times per day. (One XP team of 40 people builds at least eight or ten times per day!) The benefit of this practice can be seen by thinking back on projects you may have heard about (or even been a part of), where the build process was weekly or less frequently and usually led to “integration hell,” where everything broke and no one knew why.

Infrequent integration leads to serious problems on a software project. First of all, although integration is critical to shipping good working code, the team is not practiced at it, and often it is delegated to people who are not familiar with the whole system. Second, infrequently integrated code is often — or usually — buggy code. Problems creep in at integration time that are not detected by any of the testing that takes place on a nonintegrated system. Third, a weak integration process leads to long code freezes. Code freezes mean that you have long time periods when the programmers could be working on important shippable features, but that those features must be held back. This weakens your position in the market or with your end users.

Collective Code Ownership. On an XP project, any pair of programmers can improve any code at any time. This means that all code gets the benefit of many people's attention, which increases code quality and reduces defects. There is another important benefit as well: when code is owned by individuals, required features are often put in the wrong place as one programmer discovers that he needs a feature somewhere in code that he

does not own. The owner is too busy to do it, so the programmer puts the feature in his own code, where it does not belong. This leads to ugly, hard-to-maintain code, full of duplication and with low (bad) cohesion. Collective ownership could be a problem if people worked blindly on code they do not understand. XP avoids these problems through two key techniques: (1) the programmer tests catch mistakes, and (2) pair programming means that the best way to work on unfamiliar code is to pair with the expert. In addition to ensuring good modifications when needed, this practice spreads knowledge throughout the team.

Coding Standard. XP teams follow a common coding standard so that all the code in the system looks as if a single — very competent — individual wrote it. The specifics of the standard are not important; what is important is that all the code looks familiar, in support of collective ownership.

Metaphor. XP teams develop a common vision of how the program works, which we call the "metaphor." At its best, the metaphor is a simple, evocative description of how the program works, such as "this program works like a hive of bees, going out for pollen and bringing it back to the hive" as a description for an agent-based information retrieval system.

Sometimes, a sufficiently poetic metaphor does not arise. In any case, with or without vivid imagery, XP teams use a common system of names to be sure that everyone understands how the system works and where to look to find functionality or to find the right place to put the functionality that is about to be added.

Sustainable Pace. XP teams are in it for the long term. They work hard, but at a pace that can be sustained indefinitely. This means that they work overtime when it is effective, and that they normally work in such a way as to maximize productivity week in and week out. It is well understood these days that "death march" projects are neither productive nor likely to produce quality software. XP teams are in it to win, not to die.

Other Common Practices

The core practices of XP do not specify all of the activities that are required to deliver a software project. As teams use XP, many find that other practices aid in their success, in some cases as significantly as some of the core practices. The following are some other practices commonly used by successful XP teams.

Open Workspace. To maximize communication among the Whole Team, the team works together in an "open workspace." This is a large room, with tables in the center that can typically seat two to four pairs of developers. Exhibit 5 shows an example of an XP workstation for three pairs of developers.


Exhibit 5. Extreme Programming Workstation

By sitting together, all team members can establish instant communication when needed for the project. Teams establish their own rules concerning their space to ensure that everyone can work effectively. The walls of the "open workspace" are used to display information about the project. This will include big visible charts of metrics such as passing acceptance tests and team productivity. There may be designs drawn on whiteboards. Project status will be displayed so that any participant or stakeholder of the project can always see progress.

Retrospectives. The XP practices provide feedback to the team as to the quality of the code and its alignment to the Customers' needs. The team also needs feedback on how it is performing. Is it following the practices with discipline? Are there adaptations to the practices that would benefit the team? The practice commonly used for this is the Retrospective.11 After each iteration, the team does a short reflection on what went well during the iteration and what should be improved in the next iteration. After a release of the product, a more in-depth Retrospective is performed on the whole project.

Self-Directed Teams. A practice that is common among most of the agile

methods is self-directed teams. The best people to make decisions about the project are those closest to the details, as long as they have an understanding of the overall goals of the project. Open communication allows team members to have the information required to make decisions.

Managers are part of the communication loop but not bottlenecks in the decision-making flow.

Customer Team. As XP is used on projects with more complex requirements, a team performs the Customer function. For larger or more complex projects, the Customer team may even exceed the Programming team in size. Some of the challenges faced by the Customer team include communicating with and balancing the needs of multiple stakeholders, allocating resources to the appropriate projects or features, and providing sufficient feedback to ensure that the requirements implemented achieve the stakeholders' goals. The specific Customer Team practices are still emerging in the agile community. The practices are guided by the same values as the other XP practices.

FITTING XP TO YOUR PROJECT

Is My Project a Good Fit for XP?

Probably the most commonly debated question regarding XP is whether it can be used successfully on a particular type of project. Experience is proving that, as with other approaches to software development, the limitations often include the characteristics of the project, the people on the team, and the organization in which they work. To evaluate whether the XP practices can help a team achieve greater success on their project, consideration must be given to the project characteristics, the people on the team, and the cultures of the organizations involved in the project. The XP Values can be used as a template to test the fit of XP to a project, team, and organization. Simply evaluate the degree to which each value is currently held by the team and the organization.

Communication. Does the team communicate constantly and effectively? Does this communication extend to the customer? Is the team's software readable and understandable (i.e., is it easy for Programmers to communicate with the code)?

Simplicity. Is the team comfortable with simple solutions? Can the team implement the system without completing a design prior to coding? Is the team comfortable with some ambiguity as to the exact requirements and designs? Can the team adapt often to changing requirements? Is the team working with new code or with code that is well designed and refactored?

Feedback. Can the team get feedback on its tasks and deliverables often? Does the team accept feedback constructively? When there are problems, does the team focus on the process to identify root causes (rather than the people)? How often does the team integrate, build, and test the complete software system?

Courage. Does the organization encourage individuals to not fear failure? Are individuals and teams encouraged to show initiative and make decisions for their projects? Are organizational boundaries easily crossed to solve problems on the project?

Typically, the greater the degree to which the team can answer these questions affirmatively, the fewer changes will be required and the easier it is for the team and organization to adopt XP. Some specific project and team guidelines for getting started are provided next.

Getting Started

When selecting an initial project on which to try XP, one must consider the challenges of using the new practices. New practices introduce risk to a project. Care must be taken to select an initial project that is not burdened by all of the most difficult obstacles to using XP, but does address enough typical obstacles so that the success of the initial project can provide the basis for expanding to the rest of the organization. Although most initial XP projects are not this fortunate, ideally, the initial project will have many of the following characteristics:

• Primarily new code versus legacy updates
• An identified and available source of requirements and feedback (i.e., on-site customer)
• Delivers important business value and has management visibility
• Uses an OO language/environment
• Is typical of the projects the organization will be doing in the future
• Has a co-located team in an open workspace
• Can be delivered to the end user incrementally, with a new stage at least once every four to six weeks

In selecting the initial XP Project Team, the main attribute of the team members should be a strong commitment to delivering the project and achieving its goals using the new practices. Some healthy skepticism about XP is acceptable as long as the team members are willing to use the practices and let data and experience from the project guide any adaptations. The team ideally will have a few technical leaders familiar with other projects in the organization, but it is not desirable to have a team full of the most senior people. XP is a collaborative approach to development and, as such, the initial project will benefit from members with strong "soft" skills who prefer collaborative work environments. Beyond these characteristics, the team should be representative of teams that the organization will use in the future.

The simplest way to reduce risk on an initial project is to maximize the skill of the team as quickly as possible. This can be achieved through recruiting

team members that are already skilled in XP, training, or experienced coaching for an inexperienced team.

Adaptations

As teams begin adopting the XP practices, numerous obstacles and constraints must be confronted. The team may have trouble gaining access to the Customer every day. The team may have trouble co-locating to an open workspace. The team may be so large that communicating without formal documentation is not feasible. How do we adjust? Must we abandon XP?

The XP Values guide teams in solving these process problems with their projects. The Courage value guides us to aggressively confront and remove any obstacles that would add steps, artifacts, or complexity to the process. This often means letting common sense outweigh bureaucracy. For example, teams sometimes do not feel empowered to change the physical work environment to have an open workspace (i.e., change the cubicles). Often, a little courage, negotiating, and a power screwdriver will remove this obstacle. Some teams struggle to have a customer sitting with the team. The programmers develop from a requirements document and have never spoken to the customer. Although the thought of having a customer present is desirable, the logistics can seem impossible, particularly if the best person to sit with the team does not live near the team or is constantly traveling. Often, with a slight reorganization and a modified communication infrastructure, a customer can be identified who can sit with the team on a frequent basis.

Of course, courage can only take us so far. There will be constraints that interfere with our ability to implement the practices as described. A common example is legacy code. Many teams work with large code bases that do not have tests and are in dire need of design improvement. We want to aggressively move to the state where all of the code has passing tests, is understandable, and is well designed. The initial attempt is to rapidly get the code up to our new standard. Can we toss it and rewrite it? Would it really be that expensive and time-consuming to fix it? Is there other, cleaner code available with which we can replace it? Very often, the answers are No, Yes, and No, respectively, leaving team members no choice but to live with the smelly code and improve it as they can.

XP Values give the team a helpful, simple tool to deal with this difficult, yet inevitable challenge. The constraint that causes a practice to be modified or abandoned is reviewed against each of the XP Values, using the following question: How will the influence of this XP Value be diminished as a result? In the case of our untestable legacy code, a quick brainstorming session by the team might yield the ideas in the Impact column of Exhibit 6.

Exhibit 6. Constraint: Legacy Code Prevents Test-Driven Development

Value: Communication
Impact: Difficult to communicate the code's intent and design
Adaptation alternatives:
• Wiki* pages to document the team's understanding of the code
• Reverse-engineering tools to create models of the code
• Changing pair partners at least twice a day

Value: Simplicity
Impact: Complexity restricts some simple design alternatives
Adaptation alternatives:
• Targeting areas of the system that warrant simplification
• When changes are being made in a complex area, ensuring that one of the pair partners is new to the code

Value: Feedback
Impact: It will take longer to be aware of and fix errors
Adaptation alternatives:
• Ensuring that the tests that exist are run every day
• Adding at least one automated test every time a defect is fixed

Value: Courage
Impact: Cannot proceed as aggressively with simple design
Adaptation alternatives:
• When stories require changes to certain legacy code, increasing the amount of design discussion during the Iteration Planning Meeting

*A Wiki is a Web-based collaborative repository popular with XP teams. See www.wiki.org for more information.

6. The team discusses ways to adapt the process that is guided by the values, yielding something similar to the Adaptation Alternatives column. Each alternative that the team considers is checked for its alignment to the values. A misaligned example, an alternative that states “all legacy code changes must be approved by a Change Control Board (CCB) prior to implementation,” may be viable, but it is not simple to implement. It reduces the frequency of feedback while we wait for the CCB to meet, and takes empowerment away from the programmers, thus reducing their Courage. Other alternatives that address the constraint that align closer to the XP Values are preferred. Using this simple technique, teams adapt the XP Practices to their project and team needs. The importance of starting with Courage cannot be overstated. Many teams have been able to achieve a level of simplicity in their practices beyond what was thought possible. Although this may appear to introduce risk, Retrospectives after each iteration mitigate that risk by helping the team understand where additional adaptations are required. SUMMARY The pace of change in the software development industry remains at high. People continue to push the boundaries of known techniques and prac529

in an effort to develop software as efficiently and effectively as possible. Extreme Programming and Agile Software Methodologies have emerged as an alternative to comprehensive methods designed primarily for very large projects. Teams using XP are delivering software often and with very low defect rates. As the industry continues to evolve, we are likely to see additional insights on how to leverage collaborative work on more and more types of projects.

Notes

1. Beck, K., Extreme Programming Explained, Addison Wesley Longman, 2000.
2. www.agilemanifesto.org.
3. Sliwa, C., "Agile Programming Techniques Spark Interest," Computerworld.com, March 14, 2002.
4. Jeffries, R. et al., Extreme Programming Installed, Addison Wesley Longman, 2001, 172.
5. It is likely true that skilled practitioners of most methods are guided by a set of values, perhaps dictated by the culture of the organization, perhaps dictated by the leadership of the team. It is the expression of the values as integral to the practices that makes XP unique.
6. Schwaber, K. and Beedle, M., Agile Software Development with Scrum, Prentice Hall, 2002.
7. Poppendieck, M., Lean Development: A Toolkit for Software Development Managers, Addison-Wesley, to be published in April 2003.
8. Thomsett, R., Radical Project Management, Prentice Hall, 2002.
9. Fowler, M. et al., Refactoring: Improving the Design of Existing Code, Addison-Wesley, 1999.
10. For more information on good software design principles and their application on agile software projects, see Martin, R.C., Agile Software Development: Principles, Patterns, and Practices, Pearson Education, 2003.
11. Kerth, N., Project Retrospectives, Dorset House, 2001.


Chapter 42

Component-Based IS Architecture

Les Waguespack
William T. Schiano

A component is a building block of computer software systems. Modular construction has been the standard of software architecture for decades. What makes components intriguing now is the integration of technologies to support components, the use of components to compose business applications rather than the underlying system software, and component deployment and distribution over the Internet. These factors converge, leading to the prospect of shortened application development time and increased application software quality.

Problem decomposition is a fundamental technique in systems analysis. Through the process of decomposing the whole, one discovers the units of information and function that must come together to define and achieve the behavior of the whole. This guiding principle permeates architecture and civil, electrical, and electronic engineering. The same principle applies in the software engineering employed in the construction of computer-based information systems. Component technology is an outgrowth of the object-oriented paradigm of system modeling, and it mirrors the technique of problem decomposition by encouraging system composition using a collection of interacting, but independently constructed, parts. In this context, component takes on the specialized meaning for which we use the term in this chapter.

This chapter discusses the definition, supporting technology, use, construction, and economics of components. The goal is to explain how building components and building applications using components differ from traditional software development practice. Components have the potential to shorten application development time and reduce overall development costs. To capitalize on this potential, organizations must prepare their processes and personnel for component-based application building. For the component-based approach to be cost-effective, organizations must identify


requirements for componentization with a sufficiently broad internal or external market for use and reuse. The supply and demand side issues interact in a complex economic viability model that extends beyond organizational boundaries.

DEFINING COMPONENTS

The industry has not yet settled on a universally accepted definition of "component." In the broadest sense, a component is an artifact of systems development manufactured explicitly for the purpose of being used in the construction of multiple systems by multiple development groups. This definition encompasses most knowledge or artifacts reused in system building. That could range from documentation standards and templates to library subroutines and programming languages. How a component is used is a better way to define it than how it is built. What a component is depends on how it is used.

This chapter focuses on the role of components as building blocks in business applications — business domain components. We choose this focus to exclude building blocks of systems software (e.g., device drivers, graphics tool sets, or database management packages). The stakeholders of system software components are hardware and operating system vendors whose concern is platform efficiency and interoperability. The stakeholders of business domain components are business users, managers, and systems analysts striving to satisfy requirements driven by business processes, business markets, and government regulations. We use the term "component" to mean a business domain component for the remainder of this chapter.

A component refers to an element of software that is clearly defined and separable from the system. It interacts with the rest of the system through an explicitly defined interface. Except for the interface, a user's knowledge of the component's internals is unnecessary for it to properly function in the rest of the system. Recent advances in software system architecture have enabled greater dispersion of components on networks and on the Internet and have therefore been catalysts for independent component development and their interchangeable combination. A component can be used in myriad system constructions. Components marketed by out-of-house (third-party) producers are called off-the-shelf components. If a system developer can locate and use an off-the-shelf component, the cost of building a new, perhaps a one-time-only, solution for a particular requirement can be avoided. Off-the-shelf components arrive packaged, documented, and tested; they pose a cost-saving alternative to developing new software for each new system requirement.
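As a small illustration of what "an explicitly defined interface" means in practice, consider the following Java sketch; the CreditCheck interface, its method, and the purchasing class are hypothetical examples invented here, not a reference to any actual product or framework.

// The published interface is the application's entire knowledge of the component.
public interface CreditCheck {
    // Returns true if the account may be charged the given amount (in cents).
    boolean approve(String accountNumber, long amountInCents);
}

// The application codes only against the interface, never against the
// component's internals, so any implementation (in-house or off-the-shelf)
// can be substituted without changing this class.
class PurchaseHandler {
    private final CreditCheck creditCheck;

    PurchaseHandler(CreditCheck creditCheck) {
        this.creditCheck = creditCheck;
    }

    boolean placeOrder(String accountNumber, long amountInCents) {
        return creditCheck.approve(accountNumber, amountInCents);
    }
}

Swapping one vendor's credit-checking component for another then amounts to supplying a different implementation of the same interface.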

Components Based on Business Requirements

Application systems for a particular organizational function (or cross-functional process) often share a variety of very similar computational requirements. Domain analysts recognize these similar requirements as opportunities to define components. Architects can use components during requirements analysis to frame an application domain. Components are stable architectural primitives to be combined or arranged to suit a business procedure or process. When the procedure or process changes, components can be rearranged quickly and easily without having to revisit the component definition. Systems analysts can use components as a lexicon for expressing individual application requirements. Designers can use components as predefined parts around which to structure a system. Programmers can use components as tangible building blocks to construct executable software. At each step of software development, a component represents an explicit understanding within the application domain: what information is germane, what actions are relevant, how action is triggered, and what consequence an action has. This constitutes a contract among the members of the extended development team — the component is a stable, reliable, known quantity. Honoring the component definitions and interfaces obviates the need to understand their inner workings. Developers are free to ignore details other than those pertaining to business domain function to the extent that the collection of components represents a complete lexicon of the application domain. Application development becomes a construction set exercise.

Component Enabling Technology

Today's mainframe, desktop, portable, and handheld computers are more easily interconnected than ever before. Because this connectivity is virtually continuous, systems architects often think of their systems as network based or Internet based rather than computer based. If a resource exists anywhere on the network-based system, it is readily accessible by other computers on that network. This distributed computing model is fertile ground for components. As an example, an application designer may need some computation (e.g., credit card validation, decryption, calendar arithmetic, format translation, etc.). If the designer can find a suitable service on the network, engaging that service rather than programming it from scratch may result in cost savings.

The technology that enables components is the component framework. A component framework is a combination of protocols and system software that resides on both the component server and client computer (see Exhibit 1). The framework handles the interface connections between an application and a component. The component resides on the server computer.

Exhibit 1. Component Architecture
[Figure: two hosts connected by a network/Internet connection. The application resides on one host and the component on the other; on each host the software stack consists of a component framework, the operating system, and the computer host/hardware. A "component tap" links the application to the component through the two frameworks.]

The application (component user) resides on the client computer. They may be the same or different computers on a network. The framework hides as many details about the component implementation, the host operating system and hardware, and the network connection as possible. A component framework provides a communication path between the application and component. Regardless of the surrounding environment, the application can ignore how the component is implemented or even where it is located, and vice versa.

A component framework achieves component/application independence by providing a set of component services relating to naming, event handling, transactions, persistence, and security. These services combine to provide a transaction processing environment. Component naming services permit applications and components to identify and locate one another. Once they establish communication, they exchange information about their capabilities, services, and information formats. Component event services enable applications and components to get each other's attention and synchronize actions. Transaction services define collections of application activity that require all-or-nothing completion, regardless of local or remote component interaction. Component persistence services allow components to perform database management activities. Component security services, coupled with naming services, provide for client/server authentication and control access to both information objects

and component functionality. All the services described herein are found to some degree in every component framework, but the developer's (or vendor's) styles and approaches differ widely.

Component Framework Products

This section presents a brief survey of component frameworks. We choose these frameworks to illustrate the evolution of standards and product features over the past decade or so.

• CORBA. Common Object Request Broker Architecture (CORBA) is a component framework standard promulgated by the Object Management Group (OMG), a consortium of major computing industry players. CORBA defines protocols, services, and an execution environment for components. The OMG does not market components or a component framework. It enforces the CORBA standard and provides test suites to certify CORBA compliance. CORBA compliance assures component producers, component framework producers, and their customers that applications and components will interact reliably and consistently regardless of the vendor. The OMG aggressively solicits suggestions for extensions and improvements to the CORBA standard and publishes a wide variety of supporting technical documentation and training materials. The CORBA standard has been instrumental in enabling the cross-vendor, out-of-house component market.

• Microsoft ActiveX. Microsoft has defined four standards along the way with a suite of products it markets to support components. Microsoft's industry clout has given these standards some prominence. ActiveX predates Microsoft's other component enabling tools. ActiveX uses a Web browser on a client machine to download an ActiveX module. Once downloaded, it executes directly on the client's underlying MS Windows machine. ActiveX might be described as "just-in-time downloaded programs."

• Microsoft DCOM. Distributed Component Object Model (DCOM) is an inter-object communications protocol designed to allow components on various nodes of a Microsoft network to interact. It extends the functionality of the earlier standard, COM, which provided similar function, but on a single host computer. Tightly coupled with Microsoft's operating system product line (MS Windows), COM and DCOM support a thriving market in components for MS Windows applications, including MS Office, Visual Basic, Visual C++, and J++.

• Microsoft .NET. Microsoft's most recent offering is .NET. It extends DCOM, adding a robust component development and configuration environment. It provides application-to-component connectivity, regardless of their location on a computer, a network, or the Internet.

The .NET product suite includes program editing, compiling, and debugging in an integrated environment, Visual Studio. Several .NET-compliant programming languages are available to develop both applications and components.

• Enterprise JavaBeans and J2EE. Another major player in component frameworks is Sun Microsystems. With its invention of Java, Sun pioneered the "just-in-time downloaded programs" not only for Web browsers on MS Windows, but for any Web browser with a compliant Java virtual machine (JVM). JVMs are available for most contemporary operating systems with or without Web browsers. Unlike ActiveX, which is executable object code for MS Windows-based computers, Java compiles into machine-independent byte-code. Byte-code then executes interpretively on a JVM on the client machine. Java, unlike ActiveX, is both machine and operating system independent; and because it is interpreted, it does not pose the security problems that downloaded executable object code presents. Java is a popular choice of developers who need "program once" and "run anywhere" software. Sun Microsystems builds upon its Java base with Enterprise JavaBeans and J2EE, which support a program editing, compiling, and debugging environment for Java-based components.

J2EE and .NET represent the state-of-the-art in Internet-enabled, distributed component architecture technology. Each represents the evolution of component support from simply enabling components to connect and communicate, to an insulating enclosure of development, communication, database, and process management services that thoroughly enables (enforces) a model of scalable, distributed enterprise computing.

BUILDING APPLICATIONS USING COMPONENTS

Building applications using components differs from traditional application software development practice where programmers build information systems composed of tailor-made, one-of-a-kind, built-from-scratch programs. Effective use of components to build a new system requires a development process that is component aware. Component awareness refers to an approach in which a high value (within the organization) is placed on using components rather than building software from scratch, on (1) identifying recurring requirements suitable for component solutions, and on (2) streamlining processes to make component use efficient. Regardless of the specific steps or the specific sequence in the software process, each step requires component awareness. Either all the personnel involved in the application building process must become individually component aware, or specific personnel are assigned to oversee the use of components, or a combination of both.

Exhibit 2. Component Reusability Assessment Checklist

• What does it do?
  — Is it designed for our application domain or do we have to adapt it?
  — Does it support application domain functionality standards (ISO, FASB, IRS, FDA, etc.)?
• Who provides it?
  — Does it come from in-house or a third party?
  — What form of warranty or guarantee does the provider offer?
  — Is the provider organizationally stable/reliable?
  — Does the provider offer configuration support services or consulting?
• What are its environmental requirements?
  — Does it require a specific component framework environment (e.g., CORBA, DCOM, J2EE, .NET)?
  — Does it require a specific configuration tool environment (e.g., Visual Studio, JavaBeans)?
  — Does it require specific programming language expertise (e.g., C++, Java, VB)?
  — Is it compatible with other components we may be considering for use with it?
• How is it used?
  — Is it configurable (i.e., parameters, extension points, variable through inheritance, etc.)?
  — Does it generate source code requiring translation or linkage?
  — Does it run autonomously as a service accessed dynamically via a network or the Internet?
  — Are there multiple versions of the component, and which one is best for this project?

In a component-aware software process, requirements analysis identifies units of required functionality that are likely to be available as off-theshelf components. Systems designers are familiar with the catalog of available off-the-shelf components relevant to their application domain. Programmers are familiar with the component framework environment into which the components will plug. Programmers are able to configure their noncomponent system pieces to interoperate with that environment as well. Component-aware projects present a new level of challenge for configuration managers. Warranty, availability, licensing, liability, maintainability, and serviceability are all familiar issues with outsourced software or services. The configuration manager’s task is, however, sorely compounded when applications or systems include dozens or even hundreds of components. Each component considered, whether it is used in a project or not, exacts a cost. At a minimum, there is the cost of learning enough about the component to determine whether it is useful or not and, then, whether or not it is feasible to use it. The checklist above (see Exhibit 2) is representative of the information needed to consider a component for use in a partic537

PROVIDING APPLICATION SOLUTIONS ular development project. The questions show a progression of detailed knowledge needed as the project edges closer to actually using a component. Finding candidate components and then learning the answers to these questions may represent a significant expense. To maximize the return on investment, an organization should establish its own internal catalog of the components it has examined, where they have been used, who used them, and what has been learned about them to date. The internal catalog serves as the first point of search when future component requirements arise. It is worth noting that the internal catalog will likely contain a great deal of information about components that have not yet, and may never, be used. To control the costs of finding components and assessing their potential for any given requirement, the system’s project tasks and software development life cycle need to include an explicit component information collection and management activity. These changes in organizational practice are not confined to a small group of workers, but should be widely integrated into the organization’s practices. Such a broad integration relies on a strong management commitment and a disciplined project management approach. It is likely that organizations that are only casual followers of formal software engineering practices will find the task of formally maintaining a component-aware organization very difficult. BUILDING COMPONENTS Just as there are organizational challenges to becoming an effective consumer of components, there are also many challenges to becoming an effective producer of components. Components are intrinsically different from application software. Unlike software written as a part of a larger product, components are intended to stand alone as products themselves. They require their own life-cycle management, testing, documentation, and support. If they are destined for consumers outside the producer’s organization, they require more sophisticated documentation, examples using them, user guides, and customer support. Choosing which component enabling technologies to use is another challenge to component producers. (See “Component Enabling Technology” earlier in this chapter.) By analogy, should we build a toy car part using LEGO bricks or Tinker Toys? The choice affects not only the pool of potential consumers and the production costs, but also has architectural and compatibility implications on the component’s longevity. An even more difficult set of choices is the selection of candidate functionality for prospective components. As discussed in the “Defining Components” section in this chapter, there are few limitations on what can be conceived of or defined as a component: a component is an artifact of sys538

Component-Based IS Architecture tems development manufactured explicitly for the purpose of being used in the construction of multiple systems by multiple development groups. Which components should be built? Should we build them? Intuitively, a component should have a high probability of being used and reused — that is, high reusability. Component reusability results from three characteristics briefly described below: 1. Utility. A component’s function must be relevant to a problem domain. It is common to find components that support a particular genre of system functionality (e.g., GUI services: windows, menus, dialog boxes). It is less common to find components that are application domain focused (e.g., account, client, policy, contract, agreement, etc.). Therefore, the degree of utility depends on the domain of functionality that is a consumer’s focus. The greater the utility, the greater the reusability. (The component does something valuable.) 2. Capacity. A component’s function must be sophisticated enough such that using the component is clearly advantageous compared to build-from-scratch development. Searching, finding, learning, and then using a component is a labor-intensive effort. It would seem that good component candidates would have some degree of complexity in their functionality, along with the testing that would certify reliable and, perhaps, efficient performance. The greater the capacity, the greater the reusability. (What the component does is difficult to build from scratch.) 3. Versatility. A component’s implementation must permit convenient integration into a target application’s structure. Unless a component can be applied in a host application “as is,” some configuration or adaptation is required for one, the other, or both. Adaptations become the maintenance and support responsibility of the component consumer, and these costs may outweigh the reusability benefits of the component. In the extreme, “It may be more trouble than it’s worth!” The greater the versatility, the greater the reusability. (Although the component does not do exactly what is needed or how it is needed, it is easy enough to adapt for the need at hand.) An organization’s success in achieving reusability depends heavily on the producer’s understanding of the consumer’s problem domain. Utility depends on the problem domain almost exclusively. Capacity depends on an understanding of the architectural nature of the problem domain — which requirements are permanent and which requirements are evolving. Permanent requirements would seem to present component candidates with greater longevity potential. Versatility is strongly influenced by the choice of implementation technology and the degree of variability that a 539

PROVIDING APPLICATION SOLUTIONS particular component must support — as in the case of evolutionary requirements. Reusability forms a basis for a cost-benefit analysis to guide the selection of requirement candidates to implement as components. When the producer is also the consumer, an activity called domain analysis needs to be performed. In domain analysis, the component producer attempts to construct a model that abstracts the requirements of prospective consumers of the component in an attempt to identify shared functionality requirements. In the final analysis, it boils down to questions of economics: the consumer asks, “Does building systems using components increase or decrease the system life-cycle costs?” And the producer asks, “Does the economic return on components produced justify the cost of building them?” These questions are explored in the following section. MAKING COMPONENTS COST-EFFECTIVE Components might streamline systems but using them increases the complexity of the development process. Whether an organization is exclusively a consumer of components, a builder of components, or both, all aspects of their software development life cycle are affected. Component Consumer Issues First consider the component consumer and, for the sake of discussion, assume that useful components are readily available. Project management of component-based initiatives involves new challenges. Projects need to integrate a component culture throughout the software development life cycle in requirements specification, design, programming, testing, and maintenance. A third party may control the life cycle of some of a project’s components. In requirements specification, the scope becomes broader, as developers need familiarity with not only their requirements, but also what components are available and the interoperability frameworks on which they depend. Components may work with one framework but not another. In design, developers must choose whether to adapt the design to accommodate an off-the-shelf component, develop software to adapt the component to the requirement at hand, or forego the use of a component and just write code. Developing software to adapt the component may not be cost-effective for a single instance; but if the component is useful in other, sufficiently numerous circumstances, the adaptation cost may be justified. Such a judgment requires thorough knowledge of the require540

Component-Based IS Architecture ments gained through domain analysis, which itself represents a new discipline for many, and will require learning. Maintenance becomes more involved because of the myriad interactions. Requirements may change in one instance of component use, but not in others. A change in one component may require changes in interfaces to software modules or other components and the resulting testing. Components need to be certified reliable and efficient to save a consumer’s effort. Component providers need to demonstrate that they or a third party have certified the component. Unless the consumer can trust the component’s reliability, the consumer must perform certification. When components are certified, testing is limited to how they are configured and integrated and excludes their inner workings. For the sake of the above discussion, it is assumed that components were readily available. However, components may not be readily available for many development projects — from either producers or external vendors. Component Producer Issues Component producers must decide which components to produce. Producers need domain analysis to assess a component’s efficacy. Rather than satisfying a requirement from a single consumer’s perspective, the component producer attempts to divine a utility with the prospect of repeated use by, perhaps, several consumers in many similar requirement situations. Choices must be made about the level of granularity and interoperability. Specialized components with complex functionality may limit the number of potential consumers. Fewer consumers across which to amortize the development cost means higher per-consumer cost to acquire a component. Higher component acquisition cost eats into the potential lifecycle savings of choosing a component over build-from-scratch, and so on. Component function granularity is a delicate design parameter for the component producer. Consumer access to a component depends on which interoperability framework the producer chooses for it. The producer must follow the framework market and assess not only the current capabilities, but also the future capabilities that each framework vendor might be contemplating. These decisions must be revisited each time a component is revised or upgraded. The testing task of component producers is difficult. Although components may have compact and well-defined interfaces, testing must be extremely thorough and perfectly consistent with the component docu541

PROVIDING APPLICATION SOLUTIONS mentation. A major determinant in the component consumer’s adoption decision lies in the prospect of cost savings attributable to the component’s reliability. From the developer’s perspective, this is a major challenge because testing must often occur independent of the eventual installation. Although not commonplace at this writing, third-party certification of compatibility and reliability of components is inevitable as the component market evolves. Documentation extends to configuration and to the domain, which increases the scope and expense of the documentation process. The wider the range of users of the documentation, the greater the importance and the required sophistication of the documentation. Components must be cataloged in a way that enables consumers to find and evaluate them. Shared Producer and Consumer Issues To achieve a significantly reduced time-to-market, a great deal of development must take place in parallel, requiring coordination among groups. While a component-based project may produce fewer lines of original programming, it must address an expanded number of management issues. Reducing the effort expended in traditional development with component reuse requires disciplined and effective management of the development, selection, and use of components. Key to that is cost accounting. Cost accounting is difficult for all IS organizations, and many ignore it or use only rough heuristics. Component reuse introduces additional project costs, including warehousing, delivery, and use costs. Those costs affect each project that uses the component and raise the required initial investment. Any cost savings from reuse accrue back to the component. From the purchase of packaged software through full-service outsourcing of entire departments, outsourcing is a routine part of many information systems (IS) organizations. Similarly, when deciding whether to build or buy components, management must determine if the savings from reuse will exceed the incremental cost of obtaining a reusable component. Typically, savings occur only when a component is reused several times. These reuses often occur over several years, introducing significant time value of money issues. To be cost-effective, organizations must obtain reusable components that maximize the opportunity for repeated instances of reuse and limit the need for writing code. If the organization finds that acquiring a component is the best option, it must still choose where to buy it. Vendor selection processes involve significant investment on the part of the purchasing organization and the selling organization. The benefits of such screening are well established but it is also important to make sure that the costs of the evaluation process do not become too high in the case of inexpensive components. 542
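As a hedged illustration of that economic question, the sketch below models the break-even point for a single reusable component: each reuse saves the difference between building the function from scratch and integrating the component, the savings are discounted because reuses arrive in later years, and the net benefit turns positive only after several reuses. All of the figures, the one-reuse-per-year pacing, and the discount rate are assumptions chosen for illustration, not data from the chapter.

```java
// Hypothetical back-of-the-envelope reuse model; every number is an assumption.
public class ReuseBreakEven {
    public static void main(String[] args) {
        double acquisitionCost = 40_000;  // buy or build the reusable component once
        double scratchCost     = 25_000;  // cost to build the function from scratch, per project
        double integrationCost =  8_000;  // cost to find, learn, and adapt the component, per project
        double discountRate    = 0.10;    // later reuses are worth less today

        double cumulativeSavings = 0;
        for (int reuse = 1; reuse <= 5; reuse++) {
            // Assume roughly one reuse per year to capture the time-value-of-money effect.
            double savedThisReuse = (scratchCost - integrationCost)
                                    / Math.pow(1 + discountRate, reuse);
            cumulativeSavings += savedThisReuse;
            System.out.printf("After %d reuse(s): net benefit = %.0f%n",
                              reuse, cumulativeSavings - acquisitionCost);
        }
    }
}
```

With these illustrative numbers the net benefit only becomes positive at the third reuse, which is consistent with the observation above that savings typically occur only when a component is reused several times over several years.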

Component-Based IS Architecture There are several viable pricing models for components. Components can be sold to outside consumers, who can then incorporate them into their own systems. They can also be licensed under a variety of terms, including time period, volume, per application, and per execution. Given that components can run on remote servers owned by the developer, licensing based on usage may become more common. CONCLUSION Components encompass many widely accepted traditional software engineering principles implemented with new technologies. The emergence of the standards and frameworks described in this chapter, along with the support of the OMG, reflect the increasing maturity of the process of component development and use. The markets for components have been much slower to evolve. While there are many valuable generic components available, domain-specific components remain in short supply. Until more organizations start to use components, most domain-specific markets will be too small to support third-party development. Organizations considering wide-scale implementation of components will likely need to develop some components in-house.



Chapter 43

Does Your Project Risk Management System Do the Job? Richard B. Lanza

This chapter presents a methodology for ensuring that proper risk management techniques are included in software development projects. It reviews the key benefits of risk management and focuses on common risk management mistakes. When a project is approved, business owners are making a conscious investment in time — and, therefore, money — toward the development of the project's deliverables. Because all these owners are not project managers, it must be ensured that they understand and have promptly quantified the risks affecting the project. Normally, this should be done by the project manager, but many times, depending on the size of the project or the human resources available, anyone on the project team may step in to assess and measure risk. This chapter addresses risk management, regardless of the resource enacting the process. Although the principles discussed could be applied to various types of projects (e.g., bridge construction, railway expansion), this chapter focuses mainly on software development projects. GENERAL DEFINITIONS To understand project risk assessment, it is necessary to start from a common understanding of the following definitions. What Is Project Management? The generally accepted definition is presented by the Project Management Institute in its Project Management Body of Knowledge (PMBOK) as the “application of knowledge, skills, tools, and techniques to project activities in order to meet or exceed stakeholder expectations from a project.” 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


The project life cycle can be generally broken into three categories as follows:

1. Definition: the earliest part of the project, in which the purpose and requirements are clarified.
2. Planning: the second part of the project, in which responsibility is assigned and work is scheduled.
3. Implementation: the final phase, in which deliverables are produced to meet the project objectives.

What Is Risk Management?

One of the most beneficial tools for ensuring a successful project is risk management. The PMBOK defines risk management as "processes concerned with identifying, analyzing, and responding to project risk." Risks can be categorized as follows:

• Known risks: risks whose existence and effect are known (e.g., knowing you could get a $100 ticket for driving your car past the inspection date).
• Unknown known risks: risks whose existence is known but whose effect is not (e.g., driving away from an officer of the law who is trying to enforce the late inspection sticker).
• Unknown unknown risks: risks of which there is no awareness at the present time of their existence and effect (e.g., not remembering to inspect your car and driving it past the inspection date).

How Is Risk Management Implemented?

Risk management follows a pattern that can be explained using various terms but can be quickly outlined in the following steps:

1. Identify. Risks are identified but not judged as to their scope, magnitude, or urgency.
2. Assess. Risks are evaluated and prioritized based on their size and urgency.
3. Respond. All risks have a corresponding response that ranges from full acceptance of the risk to implementing prevention procedures.
4. Document. The identification, assessment, and response are documented to display the decision-making process used in the project and to assist in knowledge sharing on future projects.

Note that risk management is normally initiated in the planning stages of a project (after the project has been properly defined) and continues routinely throughout the implementation of the project.

What Are the Common Ways to Respond to Risk?

There are three standard ways to deal with risk:

1. Prevention: eliminating the cause before it is an issue
2. Mitigation: completing tasks to lessen the risk, such as implementing a new strategy or purchasing insurance so that the risk of loss is shared with outside investors
3. Acceptance: noting the risk and accepting any consequences it may entail

Therefore, action plans could avoid the risk completely, reduce the risk, transfer the risk (insurance), or recognize the risk and take a chance. The key determinant as to whether to take a more stringent approach (e.g., prevention) is the cost/benefit relationship surrounding that risk. Using the risk categories under the heading "What Is Risk Management?", the following list provides a response for each known/unknown risk category:

• Known risks. If the effect of the risk is large, chart a new strategy to prevent the risk; or, if the risk effect is small, mitigate or accept the risk.
• Unknown/known risks. First, estimate the effect of the risk and, depending on the projected risk magnitude, use the strategies explained for "known risks."
• Unknown/unknown risks. Because the likelihood and magnitude of this risk cannot be predicted, it is wise to add a contingency estimate to the project — for example, adding 10 percent of cost to a financial plan for "contingency allowances" without knowing exactly where this reserve will be applied.

COMMON RISK MANAGEMENT MISTAKES

With these definitions established, the focus turns to the most common oversights involved in implementing a risk management system. Most projects follow the principle of "if it ain't broke, don't fix it." As a result, small issues early in a project later become major problems. Risk management quickly transforms itself into crisis management, leading to missed deadlines or over-budget situations. These "crisis" situations generally follow a pattern in which the environment was not initially set for risk management to flourish. If business owners saw the benefits of a properly functioning risk management system, they would begin to understand its necessity in any project, which leads to the number-one mistake.

Mistake 1: The Benefits of Risk Management Are Not Presented to Business Owners

Business owners want results and many times do not want to be told of the multitude of issues affecting the completion of their project. They just want the project done, regardless of how a project team gets it done. Business owners tend to stay in a passive role when they should be actively operating in the project. Like homeowners who are having a house built, business owners need to see the work site at set intervals throughout the build process, or they may watch their investment fizzle away. It is recommended that a meeting be called in the early stages of the project to present the concept of risk management and to explain the benefits of this process. Following is a list of key benefits of risk management, in priority order:

• An early warning system for issues that need to be resolved. Risks can be identified either before the project begins or during the course of the project. Once identified, they need to be prioritized, not only as to the effect they may have on the project but also as to the level at which they need to be presented to management for resolution. A properly functioning risk management system can provide a daily assessment of the top-ten risks affecting a project for immediate resolution. If the risks are not assessed routinely, the environment is set to allow these risks to fester. In this environment, a small issue can become a much larger, even damning, problem down the road.
• All known risks are identified. Being able to sleep well at night is tough enough these days without having to think about all the unknown risks on a project. Through a properly designed risk management system, all probable and potential issues are likely identified. Reacting to this knowledge is key, but in order to act, a risk and the corresponding resolution must first be identified.
• More information is made available during the course of the project for better decision making. By identifying all of the problems and projected solutions associated with a project, a deeper understanding of the project's feasibility can be obtained. Especially early on, this information is invaluable and can provide more confidence to business owners and the project team that the project can be achieved on time and within budget. Further, risk management normally leads to improved communication among the project's stakeholders, which can lead to improved team management and spirit. For example, a highly challenging project leads people to bond together to get the job done, just as passengers on a sinking ship need to stick together to save themselves. Finally, this information promotes a learning process for future periods during which risks (and their resolutions) can be reviewed as raw material when similar future projects are underway.

Mistake 2: Not Providing Adequate Time for Risk Assessment

There is no getting around the fact that risk management provides a layer of management and additional time to the project, which leads to a layer of resources — or does it? It could also be argued that by completing a proper thinking phase up front, there is an application of the rule "Measure twice, cut once." This phase should not be underestimated inasmuch as many projects tend to work under the principle of "Get it done by a certain date, regardless of the quality of the end product" rather than "Do it right the first time." In many projects, there never seems to be enough time to do it right the first time but always enough time to do it right the second time. This mistake can be avoided by getting agreement from business owners that the cost in time to implement risk management is fully outweighed by the benefits.

Mistake 3: Not Assessing the Most Common Risks in Projects

There are risks that are common to all projects, regardless of the industry, based on various studies and research of past project performance. Three groups of common risks are now reviewed and then summarized to arrive at a final list of common project risks.

One such group of common risks was created by NASA, which sponsored a study of 650 projects occurring between 1960 and 1970 to identify key factors that led to unsuccessful projects. The major findings were as follows:

• Poorly defined objective
• Wrong project manager
• Lack of management support
• Inadequately defined tasks
• Ineffective use of the PM process
• Reluctance to end projects

Another study as to why teams fail, completed by the Hay Group and reported in September 1997 in USA Today, reported the following five factors involved in team failure:

1. Unclear goals
2. Changing objectives
3. Lack of accountability
4. Lack of management support
5. Ineffective leadership

Now that some failure symptoms have been identified for projects at large, the reasons why IT projects fail should also be reviewed. Rapid Development, a book by Steve McConnell, who spent many years working with Bill Gates at Microsoft, presents several reasons (discussed in the following paragraphs) why IT projects fail. For each reason, some methods of prevention have been prescribed.

Scope control. Scope in a project can be defined as the range of functionality the end system will provide to the user. Scope is determined by the IT system requirements which, if poorly obtained, can lead to many problems down the road. It must be noted that a scope change of one half could lead to a two-thirds decrease in the project effort. Therefore, the project manager must strive to identify the minimum requirements of the system so as to ensure that a minimum level is obtained before adding "bells and whistles" to the system. These additional requirements that are added to the system are otherwise known as gold-plating and have a detrimental effect on the project because once they are announced to be included in the project, the project could be viewed as a failure if they are not delivered. This is true even if the minimum requirements of the system are met. There are many techniques for increasing the chance of obtaining a complete and accurate set of requirements while also understanding which requirements are the most critical to the final system.

Prototyping. Talking about system functionality is well and good, but actually seeing the end product can provide a wealth of new knowledge. Many times, the true requirements of the system are not known until a prototype has been completed. A prototype could be drawn on a piece of construction paper and have no computerized functionality behind the facade. Regardless, this tool should be used on all IT projects prior to the actual design and development of the system to ensure that a common goal is understood before major work hours are expended.

Joint application development. Otherwise known as JAD sessions, this occurs when a cross-functional group of all system end users (and business owners) are gathered to review the business reasons for the functional requirements of the final system (what and how the system will perform). Before a JAD session is held, a working document is completed that summarizes the business reasons and functional requirements, based on interviews with key project stakeholders. This list is then reviewed, discussed, and debated by the JAD participants while a scribe documents the discussions. The goal at the end of the JAD session is to walk away with a final set of business reasons and functional requirements that everyone agrees with (given compromises among the JAD participants). The JAD session, from a deliverable standpoint, achieves the main goal of defining requirements, but it also has some added benefits:

• Increases "buy-in" from project stakeholders prior to development
• Removes responsibility from the project team to define system requirements (and gives it to end users/business owners)
• Increases the quality of the product by arriving at a complete and accurate set of requirements
• Improves project estimates by exposing any items within scope prior to the project plan creation (allowing time for proper estimation)

Overly optimistic schedules. In today's fast-paced environment, where time is recorded in Web years (which may amount to only a few weeks or months), development speed exacerbates any identified risk. For example, because of the need to meet a predefined project deadline, if a project team rushes the system testing phase, a system may ship with unknown bugs. In this case, the short-term goal of a deadline is reached but the long-term goal of customer satisfaction and company brand image is compromised. In the majority of cases, a predefined date leads to an optimistic schedule. For example, when Microsoft Word was being developed for the first time, it was promised in six months from its initial inception, but took well over three years to finally produce. In this case, a seasoned project manager would submit the three-year plan while the "yes-man" project manager would still be showing a six-month plan well into the second year!

Poor team dynamic and programmer heroics. At the start of the millennium, the need for project managers and, more specifically, technology project managers, outstripped the supply, and this gap should only continue in the future. Some key traits of a solid human resource management system are that the project team:

• Is provided with challenging assignments
• Meets with a career counselor periodically to discuss long-term career progression
• Receives generous acknowledgment of successes (other than more work)
• Is appropriately matched to the resource requirements of the project
• Has a backup for key tasks (for knowledge transfer and to act as a contingency if the initial person leaves the organization)
• Does not work under an overly optimistic schedule, leading to "burnout"

With regard to burnout, the project team should be on the lookout for team member heroics, when a person is expected to complete, in a shorter timeframe, a task that would normally require months or extensive additional assistance. These situations may be self-inflicted or imposed by a project manager and may not only burn out the person completing the task (leading, many times, to that person's departure) but also jeopardize the entire project.

Picking the wrong technology or vendor. Business is sometimes seen as a cold, impersonal activity, as reflected in the old saying, "Nothing personal — it's just business." Nothing could be further from the truth, as human nature does not allow us to easily separate the personal from the impersonal. This leads to decisions being made not from the standpoint of quantitative analysis but because "the vendor rubbed me the right way." Many vendors have surmised that they did not need the best product or service — just great advertising and salespeople. Therefore, project teams should be wary of decisions that are based solely on personal judgment rather than on a quantitative decision analysis using a generally accepted method (e.g., Kepner-Tregoe). A due-diligence process should have been completed for all major vendor and technology decisions, and for the short list of key vendors, a reference check should be performed and documented.

One example of a technology decision gone sour was a company (that will remain nameless) that believed it could settle its Y2K troubles by selecting a package that would, in one weekend, fix the Y2K problem. The cost of the product was high, but it would save months if not years of development and testing — well worth the cost. It was based on a principle that made sense even to those who were not technology savvy. Months went by, and no Y2K work was completed because a solution was always available — or was it? Once the millennium was near, the product's capabilities were further analyzed and, more importantly, existing customers were surveyed. It was determined through these interviews that the product did in fact do its work in a weekend, but it then took many additional months to reprogram many of the existing programs to accommodate the changes the package had made to the system. Without a detailed analysis, these facts may not have been uncovered until the last hour, leading to tragedy.

Summary of common risks to be assessed in projects. From the three lists (and the author's experience) of common project risks, a correlation can be seen, which leads to the top-five risks affecting project success (see Exhibit 1). These risks should be reviewed on all projects to ensure that they are being appropriately addressed.

Mistake 4: Not Identifying and Assessing Risks in a Standardized Fashion As presented previously, there are four steps to implementing a risk management system: identify, assess, respond, and document. Tracking system. One popular method of tracking risks is to begin by having project team members submit their issues in a common centralized database. Although this may sound difficult to establish, one such database could be a simple Excel spreadsheet with various columns for the required information. Exhibit 2 contains sample fields that could be maintained in the database. 552
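As a sketch of how such a tracking database might be structured, the class below captures one risk record using the field groups listed in Exhibit 2 and computes the expected loss as impact times likelihood, using the example likelihood percentages given there. The class name, field types, and sample values are hypothetical; a spreadsheet with the same columns would serve equally well.

```java
// Hypothetical sketch of one record in the risk tracking database;
// the field names follow Exhibit 2, everything else is illustrative.
import java.time.LocalDate;

public class RiskRecord {
    enum Likelihood {
        LIKELY(0.85), PROBABLE(0.60), POSSIBLE(0.25);  // percentages from Exhibit 2
        final double probability;
        Likelihood(double probability) { this.probability = probability; }
    }

    // Identify
    String description;
    String identifiedBy;
    LocalDate identifiedWhen;

    // Assess
    double impactDollars;        // could also be work hours or schedule days
    Likelihood likelihood;
    int urgency;                 // e.g., 1 = act now, 5 = can wait

    // Respond
    String proposedSolution;     // prevention, mitigation, or acceptance
    String assignedTo;
    LocalDate expectedResolution;
    String contingencyPlan;
    String triggerEvent;

    // Calculate loss (impact * likelihood), as specified in Exhibit 2.
    double expectedLoss() {
        return impactDollars * likelihood.probability;
    }

    public static void main(String[] args) {
        RiskRecord r = new RiskRecord();
        r.description = "Key developer may leave before system test";
        r.impactDollars = 50_000;
        r.likelihood = Likelihood.PROBABLE;
        System.out.printf("Expected loss: %.0f%n", r.expectedLoss()); // 30000
    }
}
```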

Exhibit 1. Common Project Risks (Top Risk / Quick Response Description)

• Unclear or changing goals: Complete prototype and JAD sessions to ensure that proper requirements are gathered
• Lack of management support: Ensure that business owners understand they must be active rather than passive participants to guarantee project success
• Overly optimistic schedules: Review schedules for preset deadlines (without regard to reality) and suggest more appropriate timelines be applied for key tasks
• Inappropriate project team/team dynamic: Assign proper resources for the task based on the job responsibilities and ensure employee satisfaction through improved human resource practices
• Selecting the wrong vendor/technology: Complete quantitative decision analysis for all major vendor or technology decisions

Monitoring. Once the risks have been identified, they should be monitored weekly (possibly even daily in critical times). The desired result of such analysis is to attempt to ensure 100 percent visibility into the project. One benchmark to follow is to ensure that the top-three risks are analyzed on a weekly basis (even better if the top ten risks are analyzed). To assist the monitoring process, it is helpful to segregate risks between those that relate to the project (which should mainly be reviewed by the project team with some oversight by business owners) and those to which only the business owners can respond.

REVIEW ASSESSMENT AND CONCLUSION

Given the top-four mistakes made in maintaining a risk management system, the following questions can be derived, asked of the project team, and the responses documented. In addition to asking the questions, a walk-through should be performed to observe key risk management components. The major questions are as follows:

• Have the benefits of risk management been properly communicated to business owners?
• Has adequate time been provided for a risk assessment phase of the project?
• Has a specific individual been assigned to ensure that project risk management is completed?
• Has project scope been finalized and documented through either a prototype or a JAD session?
• Have project schedules been reviewed by an independent party for symptoms of schedule optimism (e.g., preset deadlines)?

Exhibit 2. Critical Risk Information (by Risk Management Step)

Identify:
• Risk description
• Identified by
• Identified when

Assess:
• Quantify impact (funds allocated, project work hours, and/or project duration)
• Quantify likelihood (e.g., likely = 85 percent, probable = 60 percent, and possible = 25 percent)
• Calculate loss (impact * likelihood)
• Quantify urgency (related to the need for timely action)

Respond:
• Proposed solution (could be prevention, mitigation, or acceptance strategy)
• Person(s) assigned to complete solution
• Date of expected resolution
• Work hours expected to resolve risk
• Approver that ensures the resolution properly met the risk
• Contingency plan (plan to enact if proposed solution does not materialize)
• Trigger event (event for which it is determined contingency plan needs to be enacted)

• Based on the tasks at hand in the project, have the appropriate personnel been assigned, both at the project manager level and at the project task level?
• Have employee satisfaction techniques been employed, such as career counseling and acknowledgment programs?
• Have major vendor and technology decisions been made based on a quantitative, documented decision analysis?
• Does a risk management tracking system exist?
• If yes, does the system contain all of the critical risk tracking elements (see the table of key elements in Exhibit 2)?
• Are risks segregated into those that can be resolved by the project team and those by the business owners?
• How often are risks and their proposed solutions monitored?


Chapter 44

Managing Development in the Era of Complex Systems Hugh W. Ryan

As information systems do more and reach more users in more locations, complexity and size have become dominant factors in systems development. To many, it appears that every move toward making technology simpler has been matched by a corresponding move toward increased complexity. This is a prime paradox of IT today: on the one hand, technology for the business user has become dramatically simpler as end users have been shielded from complexity; on the other hand, the actual development of systems architectures and business solutions has become far more complex. Distributed computing environments and architectures that span the enterprise have meant that IT work is no longer a point solution for one department or division of a company. In most cases today, a systems development effort takes place with greater expectations that it will have a significant impact on the enterprise’s book of business. Where there is greater potential impact, there is also greater potential risk. Companies are making substantial investments in their technologies; they expect to see business value from that investment quicker, and they expect that the solution that is delivered will be robust enough to serve as a transition platform as technological change continues to compress years into months. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


MORE PEOPLE, MORE YEARS

The new complexity of systems development can be seen in several ways. First, more people now are involved in development than was seen either in the mainframe development days or in the early years of client/server. Frequently, projects today may involve anywhere from 100 to 500 people, and this figure will continue to increase until thousand-person projects become common over the next several years.

Second, the number of years required to develop the more complex business solutions also has increased. Enterprisewide solutions, delivered over several releases, may require three to five years or more to bring all aspects to fruition. This, in turn, adds additional complexities. For example, with longer development periods, the chances are good that management may go through at least one change during the course of the development project. If the project leader has not been careful to communicate and gain sponsorship at many different management levels, a change in management may put the investment at risk.

This new systems development environment is what the author's firm has come to call "large complex systems (LCS)." It is an environment where the solution:

• Requires many years to develop
• Requires a hundred or more people to be involved
• Is expected to have a significant business benefit
• Has both a high potential value and a high potential risk

NEW MANAGEMENT NEEDS The author recently has completed a year-long field review that looked in some detail at some of his firm’s largest complex systems development efforts. Based on a set of going-in positions about the challenges of such efforts, extensive interviews were held with personnel at many different levels of the projects. From these interviews, definite repeated patterns about these projects began to emerge, and it became clear that it is possible to set forth a number of factors necessary for a successful implementation of a large complex systems effort. Although a full treatment of all these factors is outside the scope of a single chapter, this one focuses on several that have to do with new ways of leading and managing a large complex systems effort: • Business vision • Testing and program management • Phased-release rollout plan 556

Managing Development in the Era of Complex Systems BUSINESS VISION A vision of a new way of doing business that will result from the large complex systems (LCS) development is critical to the success of the LCS. Although intuition, as well as prevailing business management thinking, would indicate this is so, it was important to see the real benefit of business visions played out. For example, one major project studied was one for a global stock exchange. There, the business vision was an integral part of the project — a crisp articulation of the eight essential capabilities that the final system was to provide. It was integrated into the project training and displayed in all essential documents of the project. Most important, all projects within the larger engagement had to be tied back to the vision and justified according to how they contributed to the realization of that vision. Another development effort at a national financial institution also began with a business vision and a rollout plan that clearly delivered the longterm vision in a sequence of steps. The business vision and rollout plans have served as the basis of work since they were created. The vision deals with the concept of a “model bank” — a consistent set of processes and systems that permits customers around the country to get a standard high quality of service and also permits employees to move within the company without having to learn new processes. The vision is owned by the senior management of the bank and communicated to all employees in powerful yet simple ways. Changes in management frequently can have a negative effect on the power of a business vision. For this reason, it is essential that the business vision be held by more than one individual. On a large complex system built in the United Kingdom, for example, there were several management personnel changes over the years required for the complete development. The key to success here was ensuring that, at any given time, there was a set of senior management personnel committed to the effort. As natural career progression took these people to other roles, there was always another core set coming in who continued to own the vision and push it forward. TESTING AND PROGRAM MANAGEMENT An LCS effort consists of a set of projects with many interdependencies. Many of these interdependencies may be rather subtle, but all of them must work. The traditional approach for determining whether or not things work is systems testing. It is known that traditional testing works well for a single application. However, for an LCS with many projects and many interdependencies, the systems testing approach comes up against some real limitations. Experi557

PROVIDING APPLICATION SOLUTIONS ence in these efforts is showing that it is not reasonable to expect a single project leader to define, design, and execute all the tests that are needed to verify that the LCS works as a whole. It is not the responsibility of individual project leaders to test the LCS release as a whole. An architecture group typically does not have the application skills and user contacts to undertake the testing. In other words, the testing of a large complex system as a whole, using traditional approaches, is an undertaking that has no clear owner. This creates a dilemma. Program management is not positioned to underwrite the quality of the timely delivery of an LCS effort. Individual project leaders cannot be expected to underwrite the quality of the LCS as a whole. At most, they can underwrite that their project works with its primary interfaces. An intense need today, therefore, is to develop the means to underwrite the quality and timeliness of an LCS release as a whole. In practice, successful LCS efforts have found a way to resolve the dilemma. This approach, as it turns out, is actually a synthesis of program management and what is called the “V-model” testing strategy. This new synthesis in LCS engagements is called “engineering management.” Engineering management adds a testing responsibility to traditional program management. This testing role is charged with validating and verifying that the LCS effort works as a whole, as a system of systems, to meet user expectations of a release of an LCS. For example, it will test that when all the online applications are running as a whole, online response time, reliability, and availability meet service level agreements (SLAs). Individual project leaders can be expected to have confirmed that they meet their SLAs. The project leaders often, however, cannot confirm that they continue to meet SLAs when the entire LCS release runs. They do not have access to the rest of the LCS release. They may find it difficult or impossible to create a high transaction volume with multiple LCS applications running in a production-like environment. PHASED-RELEASE ROLLOUT PLAN Finally, one of the critical success factors with LCS development involves a move away from a single release rollout strategy and toward a phasedrelease plan. Only one of the projects reviewed had attempted to use a bigbang strategy and, even in this case, it became apparent that the approach would prove problematic as the conversion approached. The effort encountered significant delays as it reworked the release plan and moved instead to the view that a phased release was most desirable. The remaining projects followed a phased delivery. The phased-release approach serves a number of functions: 558

Managing Development in the Era of Complex Systems • Reduced risk. Using a number of releases with partial functionality can reduce the risk of implementation. At the global stock exchange, for example, initial discussions with the company’s management were key in moving from the riskier single-release approach toward a phased rollout, even though the phased approach appeared to delay the benefits of the system. The balance was the reduced risk of achieving the benefits. • Early verification. A phased approach permits early verification by the business user of essential components as they work with the system in their business. From the review work conducted, it appears that a first release tends to take from 18 to 24 months. Subsequent releases tend to occur in the range of six to 12 months after earlier phases. This is in contrast to a more typical three- to five-year development that a single-release approach may require. The result of the iterations is that the user has worked with the system and provided verification of the value of the system. • Ability to make midcourse corrections. Inherent in the release approach is the ability to review the overall situation as the releases are rolled out, and thereby to make midcourse changes. As noted, an LCS effort can go on for many years, during which time a company and its business environment can go through a great deal of change. The release strategy can provide the means to deal with the changes in a controlled manner. It also can address issues in a long systems development where for periods of time the design must be frozen. • Closer user involvement. Business impact can be seen faster when rollouts are provided to the user earlier through iterations. This allows the user to build up experience with and support for the system, rather than facing a sudden conversion at the end of a large development. The downside of a phased rollout strategy is the significant increase in development cost. To the author’s knowledge, there are no widely accepted estimates of the increase in cost caused by the phased-release approach. However, there have been some evaluations that showed more than a 50 percent increase in costs when a multiple release strategy was compared to a single release. This would seem to suggest not going with a multiple release strategy. The trade-off has a very high risk of failure in a single release combined with the benefits noted previously of a multiple release. The points discussed are key to understanding the value of a release strategy. Phased releases allow a company to reduce risk, increase buy-in, and build a system that is closer to the company’s business needs. The lower apparent costs may make a big-bang approach appear desirable, but the hidden costs and greater risks may prove unacceptable in the longer 559

PROVIDING APPLICATION SOLUTIONS term. It is vital that management involved in this decision carefully weighs the costs and risks of either approach. CONCLUSION When the author began this review of large complex systems, his first thought was that the most important thing to do is simply to figure out how to eliminate the complexity. Based on the two years of review, he is convinced that eliminating the complexity is not possible. Complexity must be accepted as a part of the systems development world for the future. The size of projects that affect the enterprise as a whole tends to be large and will continue to increase. A project that affects the entire enterprise will increase complexity. Only when complexity is accepted will it be possible to come to grips with managing that complexity. Finally, today’s business environment — with its increasing focus on business partners, virtual enterprises, and the global span of business — makes complexity a reality that cannot be overcome. Delivering quality solutions in this environment must start with a recognition that complexity is inescapable. From that point, one initiates a set of strategies to manage the complexity and risk. There is no silver bullet in these strategies. The three points discussed in this chapter are examples of such strategies. Each of them is necessary, but none of them alone is sufficient to guarantee success. From a base of well-defined and -directed strategies, managing the ongoing complexity must become the focus on management in such large complex systems.


Chapter 45

Reducing IT Project Complexity John P. Murray

One reason that IT projects fail is that they are too complex. The more complex and lengthy a project, the higher the risk of failure. It would therefore seem that one way to reduce the level of project risk and failure is to reduce the levels of complexity associated with them. The pressures that drive IT project complexity are numerous. IT application project complexity is generated from a variety of sources. It can be taken as an IT project precept that virtually every project will increase in complexity once it has been approved. Two basic issues tend to drive the increase in complexity. First, IT projects are generally approved for development before all the ramifications of what is to be delivered from the project are clearly understood. For whatever reason, there is always an urgency to set a completion date for the project, regardless of how limited the information concerning the scope of the project. Second, once approved, IT projects are too often seen as opportunities to load on any number of ancillary items which, while they may be of benefit, do not justify extending project development time and associated risk. This second item can be considered a cost/benefit issue — namely, that the benefits to be gained from adding on items are usually not justified by the additional risk those items pose to the project. The typical pattern is that everyone involved in an application development project will be found to have contributed to the pressure to enlarge the project and, as a result, add to its vulnerability. Factors such as attempting to gain as much functionality as possible, to correct existing business problems in other production applications, and to incorporate new business practices are often seen as plausible reasons within the business units to encourage project growth. From the IT side, there is interest in using new development software; in moving to more sophisticated operating systems, communications techniques, and hardware; or trying new development methods. Within the senior management group, there is an 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


PROVIDING APPLICATION SOLUTIONS interest in improving the organization’s competitive position within its industry, in taking advantage of an opportunity to move to a higher level of customer service, and, often, in an assumed ability to reduce expense. Given the levels of interest in the perceived benefits of the project and the pressure from several areas to expand features and functions, it is easy to understand why IT development applications often increase in complexity as their design moves forward. It is important to recognize the relationship between project growth and increased levels of project complexity and risk. Unfortunately, it is too often the case that that relationship, if indeed it is understood, tends to be forgotten as the project moves forward. People typically get caught up in the euphoria associated with the excitement of a new development project, and, as a result, a march begins to load up the project. It is often assumed that a little more here and a little more there will not add much to the project. Although that may be the case with individual items, combined, those items can lead to considerable difficulty. A typical development scenario often follows a pattern in which a business need is identified and an approach to accommodate the need through the development of an IT application is proposed, taken to senior management, and approved. Almost without fail, once the project concept has been approved, the expansion process begins. Small changes are often presented as being necessary to the improvement of the business issues being addressed by the project. Or, it may be that items are proposed that are not entirely necessary for the business needs to be addressed, but because they are small, or appear to be small and therefore will not require much in the way of time or resources, they are added to the project. The result is a cumulative process that in a short time has added a considerable amount of burden to the project. Usually, although the items have been added, the original project completion date remains the same. From the IT side of the project comes a push to include technology items that, while not necessary to address the business issues of the application, may be nice to have or may form the basis for moving the organization to some new level of technology. Or, worst case, using the technology may be good for the resumes of individuals within the IT department. For example, the project may be seen as an opportunity to move to the use of object-oriented technology, which has not been used before within the IT department. That proposition is likely to be prefaced with a statement such as “this would be an ideal project to get us into the object world.” At that point, several salient questions should be raised. First, what are the business benefits to be found in the use of object-oriented technology, and second, how much will the level of project risk be raised by using the technology? 562

It may be that the project does provide a strong opportunity to move to object technology, and it may be that doing so is in the best interest of the organization. The important issue is not whether the organization needs to move to a new technology, but whether this is the right time and the right project in which to learn it. Too often, perhaps because of their size and complexity, new technology approaches are attached to large IT projects. Taking that approach is almost a guarantee of serious project difficulty. The best place to begin that learning is with a small, noncritical project in which time can be taken to learn and in which mistakes will not jeopardize the entire organization.

THE TRADE-OFF BETWEEN ADDED VALUE AND INCREASED PROJECT COMPLEXITY

The question to ask, relative not only to the use of new technology or approaches but to any other project encumbrances, is: how will this, in terms of lengthening the project or adding complexity, affect the project? If the answer is that the effect will be negative or the answer is not clear, caution should be exercised with regard to opening the project to include those additional items. The argument should not be allowed to move to the validity of the requests or their value. It should be recognized that the proposed additions to the project would all probably be beneficial. The issue is what, in terms of time and complexity, those items will add to the project. The focus must be on the balance between adding the requested items, whatever they may be, and the potential cost in terms of project risk associated with accommodating those requests.

In thinking about adding new technology, it is important to keep in mind that either little is really understood about the technology, or that the levels of IT time and effort needed to get it to work are going to be underestimated. In dealing with medium to large IT development projects, given their inherent complexity, adding a technology learning curve portends difficulty for the project. Given that the probability of success with IT applications projects can be increased if the complexity of those projects is reduced, taking an approach of developing more but smaller IT applications should be accorded serious consideration.

Assume that a request for an IT application project is made and the request is approved for investigation. After the analysis of the request, it is found that the project as proposed will require 3000 hours of effort to complete. The duration of the project has been established as eight months. Although the time estimated to complete the project at this point in the development cycle is probably arbitrary, that estimate is likely to be seen as absolute. That circumstance, quite common in the development of IT projects, represents another negative aspect of project development that adds to the level of risk.

Although the setting of arbitrary project completion dates presents serious project complications, that is a topic beyond the scope of this chapter other than to acknowledge it as a negative factor.

Subsequent to approval, a project team is assembled and work begins on the project. As that work moves forward, requests begin to arise for additional functions within the project. Given the business needs being addressed by the project and the benefits to be derived from the expansion of the project, the levels of staffing to complete the new requirements are approved. At this point, the completion date may be moved out to accommodate the additional work, or it may be assumed that by adding staff, the completion date need not be adjusted.

In terms of outlining the increased needs of the project, adding the needed staff, and at least attempting to reset the completion date, the project team has done the right things. What has not been done correctly, and what will bring about difficulty, is that the project team has not, in this situation, considered the increased complexity that has been layered into the project. The assumption is that adding staff and adjusting the completion date will cover the requirement to add features and functions.

In this example, the project team, although it has considered the need for additional resources and time to handle the project expansion, has put itself in an unfortunate position. By not recognizing the issues of the expansion and increased complexity of the project as they relate to potential difficulty, if not serious problems, the team has set itself up for, at the least, disappointment.

Too often, as a project expands and it becomes clear that the project is experiencing difficulty, the focus moves to the issue of adding people and time to meet the project goals. While that is an appropriate focus, it is only a partial focus in that the factors of project expansion and its related additional complexity must also be considered in the analysis. In reality, adding people to the project, whether at the beginning of the project or after it has been determined that the project is in difficulty, may be an apparently easy answer to the problem, but it may be the wrong answer. Adding more people to the project increases the level of effort associated with project management and coordination and, as a result, adds to overall project complexity.

The correct way to handle the issues involved in the example would have been, in addition to calling for more staff and time to meet the new project demands, to present and push for the option of dividing the project into more manageable components. That process might have been to structure the project into smaller components (phases), to break up the work into separate projects, or to reduce the scope of the project. Finding the right answer would depend on the circumstances, but the concern should have been with avoiding undue size and, as a corollary, complexity.

It is of course true that reducing the size of the project or breaking it into phases would cause the potential benefits associated with the project to be reduced or delayed. Every organization must make decisions about the acceptable size and risk of IT projects. However, in many instances the belief that smaller is better represents a pragmatic approach. Many large, well-intentioned IT applications projects have floundered. Some of those projects have been scaled back and salvaged, but others, after considerable cost and organizational stress, have been abandoned.

DETERMINING PROJECT DEVELOPMENT TOLERANCE WITHIN THE ORGANIZATION

Every organization has a particular level of IT application project development tolerance. The level within a given organization depends on a number of items throughout the organization. One approach to improving the IT development project process is to come to an understanding about the practical manageable size of IT applications projects within a particular organization. That determination can be made through a review of past projects.

Reviewing the results of past IT applications development projects provides information that can be used to develop success or failure trends, which in turn can be analyzed with reference to project size. Some type of criteria must be developed by which to judge project "success" or "failure." Those criteria will vary within organizations, depending on those factors that are considered important within the organization. The key with regard to any single criterion used to make the judgments is that it should be seen as a reasonable measure within the organization. In addition, the criteria cannot be allowed to become cumbersome or complex. The goal here must be to select a set of basic components that will provide not the definitive answer, but rather a guideline as to the probable success of a given project based upon past experience within the organization. Where that probability suggests that the project is likely to fail, adjustments must be made to bring project size, time, and complexity to a level that will provide a higher probability of success.

As an example, assume that during the past three years, the IT department has worked on 125 projects. During that time, an average of 700 development hours has been devoted to those projects, for a total of 87,500 hours. In doing an analysis of the 125 projects, it is found that, under the criteria developed for determining success or failure, 18 projects fall into the failure classification. Of those 18 projects, 14 exceeded 900 hours of development time and, again using the success or failure criteria, all 14 carried a high level of complexity. In addition, only three projects taking more than 800 hours have been brought to a successful conclusion during the past three years.
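A review of this kind does not require special tooling. As a rough, hypothetical sketch (the record layout, bucket size, and outcome labels below are assumptions made for illustration, not a prescribed format), a few lines of Python can tabulate failure rates by project size and make a practical ceiling visible:

```python
# Illustrative sketch only: tabulate past-project outcomes by size using
# hypothetical records. Field names and thresholds are assumptions.
from collections import defaultdict

past_projects = [
    {"name": "P-001", "hours": 450, "outcome": "success"},
    {"name": "P-002", "hours": 950, "outcome": "failure"},
    {"name": "P-003", "hours": 700, "outcome": "success"},
    {"name": "P-004", "hours": 1200, "outcome": "failure"},
    # ... one record per completed project over the review period
]

def failure_rate_by_size(projects, bucket_size=300):
    """Group projects into effort buckets and report the failure rate of each."""
    buckets = defaultdict(lambda: {"total": 0, "failed": 0})
    for p in projects:
        low = (p["hours"] // bucket_size) * bucket_size
        buckets[low]["total"] += 1
        if p["outcome"] == "failure":
            buckets[low]["failed"] += 1
    for low in sorted(buckets):
        b = buckets[low]
        rate = b["failed"] / b["total"]
        print(f"{low}-{low + bucket_size - 1} hours: "
              f"{b['total']} project(s), {rate:.0%} failed")

failure_rate_by_size(past_projects)
```

Run against an organization's real history, such a tabulation is only a guideline; the criteria for labeling a project a success or a failure still have to be agreed upon within the organization, as discussed above.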

At this point, what the analysis shows is that, within this organization, projects that exceed 900 hours of development time are prone to failure. The result of that analysis indicates that IT development projects in excess of 900 hours of effort seem to be beyond the management capabilities of the IT department. The analysis also showed that over the three-year period, three IT project managers had, regardless of the size or complexity of the project, always brought their projects in on time, within budget, and in accord with the project requirements.

Several quick conclusions can be drawn from the analysis. First, it is probably not in the interest of the organization to consider any applications development projects that approach or exceed 900 hours of effort. Second, several project managers are, despite the development environment, able to bring projects to a successful conclusion. Two conclusions can be drawn about the role of those project managers. First, those project managers should be assigned to the largest, most complex projects within the organization. Second, further study should be done to determine why those managers are successful.

Where the analysis is carefully done and the criteria used to judge success or failure are reasonable and consistent, patterns of IT project development success and failure can be identified. Done correctly, the analysis of prior applications projects should provide information about the optimum level of project size and complexity with regard to probable success. That should not be seen to imply that whatever that level is, it is the best that can be accomplished. What it means is that, for the present, a ceiling on project size can be determined. Remaining under that ceiling can bring about an immediate improvement in the management of projects, but it should also be seen as setting the base for moving to new practices that will allow that ceiling to be raised.

So, the analysis meets two requirements. It provides a guideline as to the limits of project size and associated complexity within the organization. It also provides the basis for moving forward to bring about changes within the organization, particularly within the IT department, that will lead to the ability in the future to handle larger, more complex applications projects.

In the example used, several conclusions can be drawn that can be used to identify IT project development problem areas and to begin to develop plans to raise the levels of development success within the IT department. The material drawn from the analysis in the example shows that in the past, the organization got into trouble when project size exceeded 900 hours of effort. In thinking about that hour range, it should be kept in mind that not only are the hours a factor, but as the size of a project grows, the complexity of that project also grows. An inherent linkage exists between project size and complexity, and they have to be considered as inseparable when examining the causes of project failure.

With regard to size and complexity, the analysis also shows that they do not appear to be negative factors in the development of projects if the management of those projects is under the direction of one of the several strong project managers. It would be beneficial to pursue the reasons why some managers appear to do well regardless of the size or complexity of their projects. Determining what those managers are doing correctly and then applying their management techniques to the manner in which other projects are managed could bring significant benefits to the organization.

EXAMPLES OF THE FACTORS THAT ADD TO IT APPLICATIONS COMPLEXITY

Complexity, when it applies to IT projects, is going to be found in a number of areas. Some of the more prominent areas include the following situations:

• The scope of the project exceeds the ability of the organization to handle the work. In other words, the expectations of the people involved (IT and/or business people) are unrealistic, given the levels of resources and project development experience within the organization. It sometimes happens that an organization will recognize that it is in over its head and, in an attempt to improve the management of the project, will go outside for assistance. Doing so can be another way that the complexity associated with the project will increase.
• There is an extensive use of new (either to the IT department or the industry) technology which is deemed critical to the success of the project.
• The business issues to be addressed by the project are either new to the organization or not well understood within it.
• The organization finds itself the victim of a vendor who promotes a series of application packages that are beyond the capacity of the organization to effectively manage. Another aspect of that phenomenon can be that the packages offered by the vendor do not deliver what had been expected and, as a result, a considerable amount of custom work has to be completed to obtain the needed results.
• The issue of project "scope creep" is not properly managed. Although a serious hindrance to successful IT project management, scope creep tends to be a common factor in the development of IT projects. Scope creep is just what the name implies — the size of the project is allowed to expand as the project moves forward. The problem here, as previously stated, is that project size relates to increased project complexity; so when scope creep is tolerated, additional complexity is likewise tolerated.

RECOGNIZING THE COORDINATION DANGERS INHERENT IN OVERLY COMPLEX IT PROJECTS

As development projects grow, the number of factors involved in the successful completion of the project and the complexity of those factors also grow. The problem is not limited to the issue of managing the identified project components along a specified project management time line. That effort, particularly with a large project, will present significant difficulty by itself; the problem becomes much more pronounced because of increases in the external connections to the project that will require close management.

Those connections include items such as the transfer of data between existing systems and the new applications. To complicate matters, that data may be in different formats in the different systems. There may also be timing issues in terms of when data needed from one system to another is going to be updated in order to provide current information. Developing the planning for the various interactions and making certain that the data contained in the data streams is and remains current and correct can pose considerable management challenges. The task becomes more complex because the people responsible for those ancillary systems may not feel a sense of urgency about doing the work required for the support of the new applications. That is not necessarily to imply that the people do not care; it may be that, given their normal workload, accommodating the needs of the new system will not have a high priority. Obviously, delays in progress with the ancillary systems will have a waterfall effect on the development project, which will translate into delay and additional expense.

Another way to look at the issue of the connections to other systems is to consider the growth of the number of people involved as the project enlarges. It is possible that the total number of people involved might grow by a factor of three or four, or more with really large projects. As the number of people involved grows, the coordination and communication issues within the development project can become extremely difficult to manage. In that environment, not only is the risk of failure going to rise, but the costs associated with attempting to manage the coordination and communications aspects are also going to rise.

Staying with the theme of the difficulties inherent in project growth, it must be recognized that beyond the increased difficulties associated with coordination and communication, the exposure to risk increases as a corollary of the growth. For example, a project of moderate size might involve six or seven key participants to take it to a successful conclusion. With a full-time project manager, the issues of coordination and control among the key group of participants can be fairly easily managed.

Assume that the project grows and now, rather than six or seven key participants, the number rises to 14 or 15. In the foregoing scenario, the number of connections relative to moving the project forward has grown considerably. The project manager is now faced with two to two-and-a-half times more people who will have to be included in dealing with project issues and decisions. That growth is very likely to require an additional project manager, so the cost of the project is going to be increased. Cost is not the only issue. While another project manager will help to lighten the load, the issue of coordination and communication between the two project managers must be recognized and managed.

The increase in the number of connections as the project grows will not be limited to the people-associated problems; the apparently simple issues associated with the coordination of all the aspects of the project will expand rapidly. There will be additional hardware issues; there may be the issue of dealing with different systems, data formats, and operating systems; and the timing and use of testing to handle the work being done in the different areas can become major items. Making certain that everything needed is available at the appropriate time now becomes a much larger task than it was before the project grew.

Issues associated with the testing phases of the project development process grow dramatically as the scope and complexity of the project expand. What that means in practical terms is that more attention must be paid to the management of the testing processes and to the verification of the testing results. For example, assume that a test is run in one set of programs within the project and changes are required to the programs to correct testing errors. Although those programs are now correct, the changes to the application have created the need to make changes in several other applications within the project. So the issue becomes not only one of appropriate unit testing, but also of carrying the testing changes to other areas of the project. Again, the issues of coordination and communication are of serious concern. The issue is not limited to large, complex projects; it is common in many IT projects. However, what raises the level of concern in large projects is that the change environment becomes much more difficult to manage.
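One way to see why the coordination burden climbs so quickly is to count pairwise communication paths. If every key participant may need to coordinate with every other, a team of n people has n(n - 1)/2 possible paths, so the growth from seven to fifteen participants described above multiplies the paths roughly fivefold. A small illustrative calculation:

```python
# Illustrative only: count of pairwise communication paths among project participants.
def communication_paths(people: int) -> int:
    return people * (people - 1) // 2

for team in (7, 15):
    print(f"{team} key participants -> {communication_paths(team)} possible pairwise paths")
# 7 participants -> 21 paths; 15 participants -> 105 paths, roughly a fivefold increase.
```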

TAKING STEPS TO CONTROL IT PROJECT COMPLEXITY

Discipline must be seen as a critical success factor in any IT development project. When dealing with the issue of the control of IT project complexity, the work can be made considerably easier if an appropriate level of discipline is a component of the development process. The importance of discipline as it relates to project success has to be recognized, and its application must be consistent throughout the entire development process. Moving to a higher level of discipline, particularly in organizations where it has been lacking, can be a difficult task.

Attempting to improve the discipline associated with maintaining and controlling the size and complexity of IT applications projects is not easy. Adopting a more disciplined project management approach is likely to open the IT department to charges of being unwilling to provide higher levels of service, of being uncooperative, and of lacking a sense of customer service. In organizations in which there is already a level of hostility between the IT department and other sections within the organization, attempting to raise the level of IT project development discipline will increase that hostility.

When it comes to the issue of project discipline, one of the duties of those who have responsibility for the eventual success of the project must be to very carefully assess the size and complexity of what is being proposed within the project. Although there is a natural tendency to want to accommodate any and all requests for features and functions within a particular system, that tendency must be modified by the reality of the possible. Accommodating every request should not be the goal of the project; rather, the goal should be to deliver a reasonable set of functions and features on time and within budget. If project discipline is in place at the beginning of the project, and is consistently maintained throughout the project, the eventual result is much more likely to be what everyone had expected. When that occurs, everyone benefits.

Assessing proposed IT development projects in terms of the size, features, and functions to be delivered within the established project funding and schedule should be seen as a joint effort between the IT department and those business units in which the new applications will be used. To be effective, the work associated with coming to a realistic project size cannot be done through a process in which the members of the IT department attempt to mandate project size. The approach has to be to include every area that has an interest in the ultimate result of the project and to work out a compromise that comes as close as possible to meeting the needs of all the areas. Those needs must be met within the context of maintaining reasonable project size and complexity.

What constitutes "reasonable" project size and complexity? There will be a different answer to that question for every organization and every project. Each time an IT project is proposed, except for small projects, the issue of reasonableness is going to have to be carefully considered. Taking the time to make that consideration should be seen as one component of improved project management discipline.

It is understandable, given the pressure on everyone involved to deliver more function and features at an increasingly rapid pace, to want to be as responsive as possible. The apparent way to do that is to include as much as possible in a single project. That issue can be compounded by the need for various features and the very real possibility that, if they are not included in the current project, the opportunity to get them may simply be lost. Where the need is recognized, along with the often real situation that now may be the only time to obtain the particular features, it is very difficult to resist loading as much into the project as possible. However, the important question here has to do with the probability that the project, having grown too large to effectively manage, will fail, and nothing (or perhaps very little) will be delivered.

Having come to a realistic assessment of what can be delivered in terms of IT projects within an organization, IT management has both an obligation and a duty to set and hold the line on project size and complexity. If developing the project in phases, or as a series of smaller projects, makes good business sense, that route should be taken. The idea has to be to assist the organization to avoid making project mistakes.

CONCLUSION

Although IT projects are inherently complex and difficult to manage, and the larger and more complex the project the more difficult effective management becomes, steps can be taken to mitigate the potential difficulties associated with IT projects. To begin, there must be an awareness within the IT department that project size and complexity represent the primary factors with regard to project success or failure. Organizations must come to some level of understanding as to what the particular culture can successfully absorb in regard to project size. When that understanding has been reached, everyone involved has to be willing to cooperate to make certain that proposed projects stay within a size and complexity that presents a reasonable chance of success.

Restraining the natural tendency to add as much function and as many features as possible onto every new IT project must be seen as a discipline that must be accommodated within the organization. The goal has to be one of balance between obtaining the most value possible from the project and avoiding a risk of failure that is too high for the organization.

One of the problems is that in many organizations, the ability of the IT department to meet the needs of the organization has been constrained by the use of arbitrary IT funding levels. Those funding levels are usually too low, and the result is that IT cannot meet the business needs of the organization. Where that circumstance has existed over a period of years, the result is often to try to gain at least some of the needed business items through IT project overloading.

An important aspect of making the changes needed to reduce the risk of IT project failure must be to recognize the cultural issues involved in dealing with the problem. Making the needed changes is going to have to be a joint effort among all areas involved in IT projects. To begin the process, IT management should provide the needed leadership. Preparing and presenting factual information highlighting existing IT project development approaches that lead to difficulty, coupled with a factual analysis of the level of development capability within the organization, constitutes the first step in moving to improvement. The next and more difficult step is going to be to "sell" the benefits of making needed changes throughout the organization. Doing that, as is the case with any cultural change, will not be quick or easy, but IT managers should see making it happen as one of their mandates.

In many organizations, developing a case that the current processes are not working will not be difficult. Using that case as the basis of a presentation to senior management for support to begin to take a different approach to the way IT projects are considered and structured within the organization should provide support for needed changes. Where there is a clear, well-presented review of the facts, coupled with a well-developed plan to move to improvements, the probability of gaining senior management support will be high.


Chapter 46

Software Quality Assurance Activities

Polly Perryman Kuver

IMPLEMENTING BASIC QUALITY ASSURANCE

Every corporate culture adds its own distinct tone to the methods and procedures put in place to implement basic quality assurance. In some companies, an adversarial relationship between development and quality control might need to be refocused on the premise of a product team rather than a departmental team. In other companies, the collective ego of the development staff might need to be brought into perspective, or the attitude of quality assurance might require adjustment. Before trying to implement basic quality assurance activities, it makes sense to assess what changes, if any, need to be addressed within the organization itself. Once this is done, the next step is to determine how and when the quality activities will occur. This will form the fundamental process that will affect the final value of the software product.

Exhibit 1 lists and explains each of the quality activities that can be used to strengthen the development effort and build in quality. These quality activities correspond to the basic development life cycle; that is, the six phases needed to bring a software product into existence. These phases are shown in Exhibit 2. The trick to making them work is to make sure that all of the developers and quality assurance engineers understand what is meant by each of the activities, and that they know when and how the activities will be conducted within the organization.

The organization must also address the formality with which the activities themselves will be implemented. Custom software that is developed for private and government customers requires a higher degree of formality than software developed for resale in the commercial marketplace. Formality lends uniformity to a process but does not necessarily make the process better.


Exhibit 1. Quality Activities

Quality activity: Requirements analysis
Development phase: Requirements
Value: Ensures that all requirements are clear, concise, and testable
Impact: Development has a clear vision of what the customers' expectations for the software are; customer expectations can be managed from a knowledge base

Quality activity: Design reviews; test plan creation
Development phase: Design
Value: Ensures that the design covers all of the requirements set forth by the customer
Impact: QA can begin designing meaningful test cases; development has a roadmap for development; fewer surprise problems occur, and the crisis and hero factors decrease

Quality activity: Code walkthroughs; preliminary testing
Development phase: Coding and Database
Value: Code is assessed for maintainability per coding standards; database issues are uncovered earlier
Impact: Maintenance plans can be developed; QA learns the product prior to commencement of testing; depth of test cases can be evaluated

Quality activity: Multiple layer testing and evaluation; SPR management and verification
Development phase: Testing
Value: The software is exercised to identify as many problems as possible
Impact: The product improves

Quality activity: Certification
Development phase: Implementation
Value: Demonstrated proof is provided that the product will work as intended when released to customers
Impact: Satisfied customers and increased market share


Exhibit 2. Software Development Life Cycle

In other words, it is just as valid to have requirements documented in a series of e-mails as it is to have them packaged in a three-ring binder and documented in a special format with lots of words wrapped around them. The important thing is for requirements to be specifically stated, reviewed, clarified, and used to map all activities throughout the development project.

REQUIREMENTS ANALYSIS

Requirements are the expression of what software is supposed to be and do, once it is developed. They comprise the wants and needs of the software end users. In some companies, it will be the genuine end user who provides requirements. This is generally the case when a business is using an independent company to develop the software it needs, as is common in government, banking, and some retail companies. In other companies, customer needs will be identified by input from marketing and field representatives. This is generally true for commercial product development companies. These companies create software for other companies to use, such as payroll software, graphic software, and databases. In still other companies, the ideas and concepts of employees and management might successfully identify a need for a new product line. For example, many companies are meeting today to revamp their products for use on or with the Internet.

The diversity of requirements sources is matched by the differences in their expression. One person might state that "the software must process employee records," while another person requesting the same functionality might state that "the software must allow for employee records to be input and maintained separately from the timesheet records." While both statements are, in fact, requirements, neither is specific enough to tell the developer what the software is supposed to do, and how it must work to meet user needs. That is why all incoming requirements must be analyzed, regardless of their source.

Given that the requirements analysis activity calls for all requirements statements to be evaluated, the process consists of:

1. Reading the requirements statement
2. Evaluating the clarity of the statement
3. Getting clarification as needed and restating the requirement
4. Determining the completeness of the requirements (e.g., if there are system requirements addressing platforms, runtime, and installation; if there are function and feature requirements; and if there are operational requirements addressing shortcut key and help availability)
5. Determining if a test can be devised to prove the requirement has been met
6. Reviewing the requirements with the person or people who provided them, to find out what they really want
7. Getting agreement from the source, development, and quality assurance on what the requirement means
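One lightweight way to keep track of where each requirement stands in this process is sketched below. It is a hypothetical illustration only; the field names and the notion of a "baseline-ready" requirement are assumptions made for the example, not part of the chapter's method.

```python
# Hypothetical sketch of tracking the analysis status of each requirement.
from dataclasses import dataclass

@dataclass
class Requirement:
    identifier: str
    statement: str
    source: str          # who provided the requirement
    clarified: bool      # restated after follow-up questions (steps 2-3)
    testable: bool       # a test can be devised to prove it (step 5)
    agreed: bool         # source, development, and QA agree on meaning (steps 6-7)

def not_baseline_ready(requirements):
    """Return requirements that cannot yet be compiled into the product baseline."""
    return [r for r in requirements
            if not (r.clarified and r.testable and r.agreed)]

reqs = [
    Requirement("R-01", "Employee records can be input and maintained separately "
                "from timesheet records", "payroll supervisor", True, True, True),
    Requirement("R-02", "The software must process employee records",
                "marketing", False, False, False),
]
for r in not_baseline_ready(reqs):
    print(f"{r.identifier} still needs analysis: {r.statement}")
```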

When the final, agreed-upon requirements are compiled into a single list of concise, clear statements, a product baseline is established. This baseline sizes and scopes the product, enabling guidelines to be created under which the product will be developed.

DESIGN REVIEWS

Once the requirements are known, it is up to development to design a system that meets or exceeds them. It is incumbent upon quality assurance to review the design, to demonstrate the product's stability as it moves from design to development, and to ensure that the design is testable. There are several ways to manage and conduct design reviews. The method that a particular organization selects must be consistent with the method it uses to design the system.

In some companies, design is equal to a prototype system. In this case, the review is conducted with a demonstration of the software and the list of requirements in hand. As developers show that the requirements exist, they are checked off. In other companies, the software developer checks off the requirement as the prototype is developed, and quality assurance personnel merely keep a running total on what has been designed and what is missing.

The best method is for the system specifications to document the design in some form. This approach ensures that all significant details associated with the development effort have been addressed before coding is begun. As a result, quality assurance can not only verify that the design meets the requirements, but it also has a sufficient level of information to develop serious test plans. Because the scope of the developed system is laid out in black and white, the methods for testing become clearer. Getting the design fully documented provides two additional benefits: a shared knowledge base for the team, and a learning tool to prepare for coding and testing. Documenting the design also reduces the risks associated with expanding schedules and big, mid-development surprises.

CODE WALKTHROUGHS

While code walkthroughs are more critical in some types of development than others, they are helpful in all development efforts. It is during these walkthroughs that minor issues — which might otherwise become big problems — are uncovered and resolved with the lowest impact to the current and future schedules. It is also during walkthroughs that the organization determines whether code is sufficiently commented to ensure maintainability in the future.

Those conducting a code walkthrough literally look at and read the code to uncover actual and potential problems as early as possible. Commonly found problems include missing icons, missing methods in a specific object class, undefined methods, and inefficient code structuring that is sure to impact performance. Code walkthroughs also check naming conventions, and verify that all other predetermined standards have been met and that all associated code is ready.

Code walkthroughs can be conducted by a team leader or a "build captain" before the submission of a module is approved for a software build. A member of the quality assurance team might also conduct code walkthroughs at prescheduled times during the development effort. The point is that performing code walkthroughs saves time. When conducted correctly, code walkthroughs ensure that only working code makes it into the test environment.

PRELIMINARY TESTING

In fact, code walkthroughs can be easily performed as part of unit testing by the development staff, and then viewed by a quality assurance engineer who will benefit by learning how the code is supposed to behave. Then, when the build occurs and the software is made available in the test environment, analyzing test results becomes more efficient and more meaningful. When software problem reports (SPRs) are submitted, the problem description can be relied upon as a starting point for creating the fix.

In addition to monitoring unit testing, the quality assurance team should now be checking and reporting on the number of functions that have been tested, the number that work, and the types of problems encountered. This preliminary testing should focus on areas that development thinks might be problematic, in order to help development find, isolate, and fix problems as early in the development process as possible. When preliminary testing is used to catch the "stupid" bugs, multiple layer testing can be more rigorous in exercising the software, thereby identifying more complex problems that might otherwise cause the user grief.

MULTIPLE LAYER TESTING

Multiple layer testing eliminates this eventuality by testing the software mechanics and operations, and addressing both coverage and depth. To determine if an organization is conducting multiple layer testing, it must consider whether the application itself is being tested, or just the user interface (UI). If all testing begins and ends with the UI, it means one layer of testing has been eliminated, because no one is checking where data is being stored, whether the format of the stored data is being correctly managed, and so on. While everything might appear to be fine on the UI side, underneath, the software may be corrupting the data that users need.

For multiple layer testing, the organization must determine if the mechanics of the software are being tested from the UI. In addition, all of the function and feature requirements specified by the input source must show up and operate as planned. The organization must also determine if user cases are being used to test the software. That is, scenarios should be developed that indicate how the user will be using the system and how the user will exercise the system. Finally, the system performance should be checked and rechecked against the requirements, the software should install correctly, and the runtime for each function should be consistent with the requirements.

These multiple layers of testing must be conducted in conjunction with one another. Then the coverage of the software testing is sufficient, and the depth of the testing is adequate to proclaim that the organization practices good, solid quality testing.
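As one hedged illustration of a test layer beneath the UI (the save_customer function, the table layout, and the use of an in-memory SQLite database are invented for this example), a test can push data through the application path and then read the stored row back directly, rather than trusting what the screen displays:

```python
# Illustrative sketch: a test that goes below the user interface and checks
# what was actually stored. The function and schema are hypothetical.
import sqlite3

def save_customer(conn, name, phone):
    # Stand-in for the application layer the UI would normally call.
    conn.execute("INSERT INTO customers (name, phone) VALUES (?, ?)", (name, phone))
    conn.commit()

def test_customer_is_stored_correctly():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, phone TEXT)")
    save_customer(conn, "Ada Lopez", "555-0100")

    # Layer below the UI: verify the stored data, not just what the screen shows.
    row = conn.execute(
        "SELECT name, phone FROM customers WHERE name = ?", ("Ada Lopez",)
    ).fetchone()
    assert row == ("Ada Lopez", "555-0100"), f"data corrupted or misformatted: {row}"

test_customer_is_stored_correctly()
print("storage-layer check passed")
```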

SPR MANAGEMENT AND VERIFICATION

The number and types of problems identified and described at each layer of testing must be managed to ensure that the most critical items are fixed first. In most organizations, it is very helpful to have a quality assurance engineer participate in or even conduct software problem report triage meetings. Early in the development cycle, the SPR triage could be held weekly, but when the delivery date gets ever closer, daily triage meetings will help both the developers and quality assurance.

When the evaluation of an SPR reveals that the fix cannot be made in time for the release of the software, it is not just eliminated from the list. Rather, a determination is made as to whether it will be postponed until a future maintenance release, or if it will be moved to the product's next upgrade release. SPRs that will be fixed should be listed and included in a build schedule. This ensures that none of them is lost somewhere between triage and certification. It also helps to maintain the momentum of the project at a peak until its completion; this is often necessary as the finish line nears.

As the build with scheduled SPRs is put into the test environment, the organization must run regression tests to ensure that the fix works, and that the repair did not break anything else in the system. The software is only ready for certification when all of the scheduled SPRs are fixed and the regression tests fail to produce additional SPRs.

CERTIFICATION

Certification is the point in the project when the software seals the company's reputation, and might mean different things to different companies. What it should mean to all companies is that the software is fully ready to be shipped to the customer. In some companies, the certification process consists of a meeting of all parties, whereby the paperwork is checked and signed off. In other companies, certification takes place when upper management takes the software, installs it, uses it, and nods affirmatively. Still other companies require a certification team to install and bang away on the software, trying to uncover yet another problem, to offset a possible disaster.

Whichever method a company uses, it should be official and followed by a team celebration. After all, traveling through the development life cycle to this point and achieving quality goals in the process is good reason for praise — as well as precious moments of fun and relaxation.
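The readiness rule stated above, that every scheduled SPR is fixed and regression produces no new SPRs, can be expressed as a simple gate. The sketch below is hypothetical; the SPR record format is an assumption made for illustration.

```python
# Hypothetical sketch of the certification gate described above: the release is
# ready only when every scheduled SPR is fixed and regression raised no new SPRs.
def ready_for_certification(scheduled_sprs, new_sprs_from_regression):
    unfixed = [spr for spr in scheduled_sprs if spr["status"] != "fixed"]
    if unfixed:
        return False, f"{len(unfixed)} scheduled SPR(s) still open"
    if new_sprs_from_regression:
        return False, f"regression produced {len(new_sprs_from_regression)} new SPR(s)"
    return True, "all scheduled SPRs fixed and regression is clean"

scheduled = [{"id": "SPR-101", "status": "fixed"}, {"id": "SPR-102", "status": "fixed"}]
ok, reason = ready_for_certification(scheduled, new_sprs_from_regression=[])
print(ok, "-", reason)
```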

SUMMARY

When a company has clearly defined quality goals and objectives, and employees know what to strive for, then the product improves, and morale and staff retention improve with it. Thus, an organization must determine which quality activities it will strive for, along with the level of quality it hopes to achieve.


Chapter 47

Six Myths about Managing Software Development

Linda G. Hayes

With widespread frustration over slipping schedules and outright runaway development projects, now is an auspicious time to call into question the most basic assumptions regarding management of software development and adopt a strategy that reflects reality. This chapter reviews and debunks these assumptions, outlines an alternate view that takes into account the quantum changes that have occurred in software development within the last decade, and positions managers to succeed into the future.

MYTH 1: REQUIREMENTS EXIST

That requirements exist is the fundamental assumption of all software development projects. At first they exist in the minds of the users, then in documented form from which the design proceeds, and finally they form the basis for acceptance of the finished project. As logical and hopeful as this may sound, it is generally not proven true.

Although users know what they want, it is not unlike the Supreme Court's understanding of pornography: they know it when they see it. Until then, requirements are inchoate: inarticulate needs that do not, of themselves, describe the features and functions that would satisfy them. What is widely attributed to scope creep, or the continuous expansion of the project requirements, is really just the natural process of distilling user needs into objective expression.

For example, a requirement for a banking application might be to maximize the profitability of each customer. Once understood, this requirement spawns new requirements: the need to maximize the number of services sold to a single customer, which leads to the need for a unified database of all customers and services, and relational access among them.


Once the implementation begins, these requirements metamorphose into yet more: users realize that what appear to be many separate customers and accounts are often managed by a single individual, such as a parent creating trust accounts for children, and the requirement arises to define and associate sub-accounts, followed by the need to present consolidated statements and reports … and so forth, and so on.

Therefore, if the project manager assumes the requirements can be expressed coherently and completely before development commences, and the entire project plan and schedule are based on that assumption, subsequent changes wreak havoc. Scope creep, then, is really the inevitable realization that requirements are living, growing things that are discovered instead of defined.

Reality: Seek Problems, Not Solutions

Instead of asking users to describe requirements and solutions, ask them to describe their problems. If this is a new system, find out why it is being created — what is being done today and how will it change with the system? If this is a rewrite of an existing system, ask what is wrong with the old one. What problems will be solved? This is a radically different approach than asking for requirements.

A commonly told illustration of this involves the building manager whose tenants complained about slow elevators. After rejecting a series of costly elevator upgrade or replacement scenarios, the manager hired a problem-solving expert. This expert interrogated the manager. What was wrong with the elevators? The tenants, he said, were complaining about waiting for them. Why did that matter, the expert asked? Because if they are unhappy they may move out, the manager responded, and I may lose my job. The expert promised a solution.

The next day, mirrors were installed outside the elevators on all floors. The tenant complaints subsided. The expert explained: the tenants were complaining about waiting, not about the elevators. The solution was to make the wait more pleasant, and mirrors offer the most popular pastime of all: admiring ourselves. This solution, of course, cost a tiny fraction of the time and money necessary to speed up the elevators.

The point is, if one focuses on the real problem, one will arrive at the best solution. If one starts with a solution, the wrong problem may be solved.

MYTH 2: DESIGNS ARE DOCUMENTS

The next assumption proceeds from the first. If requirements can be captured and subdued into a static state, then the design can be based upon them and reduced to a written document from which development flows.

This assumption fails not only because the first is flawed, but also for an independent reason. It is difficult, if not impossible, to adequately express functionality in words and pictures. Software is interactive; documents are not.

There are at least two levels of interactivity in a design. The first is external, at the user interface level: what the user sees and does. A perfectly plausible screen design on paper may prove to be impractical when created. What appear to be trivial matters, such as using the mouse to position focus on a field or object, may render the application awkward and unworkable to a high-speed data entry clerk, trained in touch-typing, whose fingers never leave the keyboard.

The second level of interactivity is internal: the hardware platform, operating system, development language, database, network topology, and other decisions that affect how processing occurs and data is managed. An otherwise elegant database design may fail due to the response time of the underlying network access protocol, or sheer volume demands.

Reality: Go with the Flow

Instead of expressing the design as a series of static artifacts — data elements, screens, files, reports — describe it in terms of business processes. What will the user do with it? Users do not think about customer information databases or order entry screens. They think about adding a new customer as the result of an order, answering customer questions about their orders, shipping the orders and sending out invoices, and making sure invoices are paid. They think in terms of processes, information, and workflow: they know their job.

Understanding the system as a set of processes as a user experiences it, beginning to end, will lead to a much different design than approaching the subject as a set of disparate entities. It forces the consideration of the flow and purpose of information — not how it is stored, but when and why it is used.

Another important aspect of what is done is: how many and how often? Will there be 100 customers or one million? What happens most frequently — entering new customers or checking on orders? Are dozens of orders received daily, or thousands? These numbers will greatly influence the internal design of the system, including not just the amount of storage but also the throughput rates. The external design is also affected; screens are designed to support the way they will be needed. Frequently needed information will be readily accessible, and high volume transactions will be streamlined for heads-down entry instead of heads-up aesthetics.

Do not ask the users what they want to see. Ask them what they need to do.

MYTH 3: DEVELOPMENT IS LINEAR

If the foundation of requirements was coherent and complete, and the structure of the design solid and stable, development would indeed be a simple, predictable matter. In the traditional, linear development life cycle, coding is a segment that begins after design and ends with test. Yet everyone knows this is not how it is. Our budgets tell us so.

Sixty to 80 percent of corporate IT budgets are consumed by maintenance, which is a euphemism for development on existing systems — systems that have already been "released," sometimes decades ago. There is not an application alive — that is, being used — that does not experience constant development. Whether called modifications or enhancements, the fact is that 25 percent of even so-called stable applications undergo revision each year. This indicates that software systems reflect the business, and successful businesses are in a state of continuous change and improvement. Change can be a good thing, but only if it is planned.

Reality: The Schedule Rules

Once ready to start creating the system, set a schedule that provides for the earliest possible release of the least possible amount of functionality. In other words, deliver the system before it is ready. Do not come up with the design and then the schedule; come up with the schedule first and design as you go. Do not target for error-free completion; attempt to have something that does something. Sometimes called "time-boxing," this approach focuses on rapid-fire releases where the amount of functionality in a given release is based on the amount of time, not the other way around.

One can think of this as rapid prototyping — it is and it is not. Rapid prototyping usually means throwing together a mock-up that is used as a model for the real thing. Instead, this is the real thing; it is just successively refined. Today's development technologies make it only incrementally more difficult to create a screen that works than one that does not. In the early stages one might use a "toy" or personal database while nailing down the actual contents, then later shift to an industrial-strength version when the tables settle down.

The point is to get users to use it right away. The sooner they use it, the faster will be their feedback. Make sure everyone knows this is a moving target, and do not get painted into any corners until necessary. Stay out of the critical path at first, and let the users report when it is ready for prime time.

Expect changes and problems and plan for them, which means not only releasing early but repeatedly.

MYTH 4: DEVELOPERS DEVELOP AND TESTERS TEST

The mere fact that there is a title or job description of tester does not mean that testing is only done by testers. Quite the contrary: a major component of the test effort occurs in development. The fact is that only the development organization has the knowledge and information essential for unit, integration, and system testing: testing the individual units, their interaction with each other, and their behavior as a whole. Developers are responsible for creating the software, and they not only should test it — they must.

The assumption that only testers test is especially insidious because it shifts responsibility for software quality to those least able to affect it. The testers are not there to check up on development; they are there to protect the business. When development operates under the assumption that they have a safety net, the odds are higher that the system will crash. The real and only reason for having an independent test organization is to represent the users — not to ensure that the software does not break, but that it does what the business needs.

Reality: From the End of the Line to the Front

In this new paradigm, testing moves from the last line of defense for development to the front line of defense for the business users. It changes from testing to make sure the software runs to making sure the business does. Developers test software; testers test business processes.

This means the test cases and conditions are derived from the processes that have replaced the requirements. Testers do not verify that the order entry screen pull-down list of items is sorted alphabetically; they try to enter 100 orders in an hour. Granted, the sorting of the list may dramatically affect productivity if it is not alphabetized, but the focus is on how well the job gets done, not how well the development was done. The design has no meaning outside of its purpose: to support the process.

In this scenario, testers are not baby programmers hoping to graduate to real development. They are expert users, making sure that the business needs are served. Developers are not creative, temperamental artistes; they are professionals delivering a working product. The purpose of testing is not to break the system, it is to prove it.

MYTH 5: TESTERS DETERMINE QUALITY

Test organizations generally find themselves in an impossible position. They are asked to determine when or whether the software is "ready." This is impossible because the testers usually cannot control the quality of the software provided by development or the rate at which problems are corrected. They cannot control what end users expect of it or will do with it. To say that the schedule is out of their hands … well, that should go without saying.

The uncomfortable truth is that testers are often regarded as impediments to release, as though they somehow stand in the way of getting the software out the door. This is a dangerous idea because it puts testing in a no-win situation. If they find too many problems, the release to production is delayed; but if they do not find enough, the release fails in production.

Reality: Ask, Do Not Tell

In the new millennium, the business decides when the software is ready, based on what the test group discovers. The rolling release strategy provides for a constant flow of functionality, and the test organization's role is to constantly measure and report the level of capability and stability of the software. However, it is the business user's decision when it is acceptable. In other words, the user may elect to accept or waive known problems in order to obtain proven functions. This is a business decision, not a test criterion.

This does not absolve development from creating a working product, or the test organization from performing a thorough analysis. It does mean that it is not up to them to decide when it is ready. This can work either way; the developers may be satisfied with a design that the users reject, or the users may decide they can live with some bugs that drive the testers up the wall. The key is to remember that the system belongs to those who use it, not those who create it.

MYTH 6: RELEASES ARE FINAL

The initial release of an application is only the first of many, perhaps over decades. Mission-critical applications are frequently revised monthly, if not more often, throughout their entire lives. The idea that everything the system will ever do must be in the first release is patently untrue. This belief drives schedule slip: that one must hold up or delay the system because it does not do one thing or another, or because it has bugs. The truth is that it will never do everything and it will always be imperfect. The real question is whether it can provide value to the business today and, especially, in the future.

Thus, a software release is not an event, it is a process. It is not a wall — it is a step.

Reality: The Rolling Release

The concept of a release as a singular, monolithic, and often monster event is an anachronism. Software that truly serves the business is flexible and responsive, supporting competitive agility and rapid problem resolution. Releases are like heartbeats: if they are not happening regularly, the system is dying. Therefore, instead of a one-year development project with a vacuum after that, plan for four quarterly releases followed by monthly ones. During test, make weekly builds available. While this may sound like a pressure cooker, and using traditional methods it would be, it can be properly positioned as a safety valve. Design defects are more easily corrected the earlier they are identified. Additionally, errors or inconsistencies are less annoying if they will be corrected in weeks instead of months. Emotion over missed requirements subsides considerably if they will be coming sooner rather than later. Value is perceived faster, and the potential for runaways is all but eliminated. All of this, of course, drastically changes the nature of testing.

RECOMMENDATIONS

With a development process based on assumptions that are consistently demonstrated to be untrue, it is no wonder one misses schedules and budgets. The answer, of course, is to throw out the existing process and define a new one based on reality. The accelerating rate of new technology and techniques aimed at improving the development process can address the technical hurdles but not the organizational ones. For this rapid-fire, rolling release strategy to work, several things have to happen.

Code Speed

Although it sounds good to say that developers need to slow down and get it right, the fact is they need to speed up and get it perfect. A case in point is the no-longer simple process of creating a build, or executable. The build process involves assembling all of the individual components of an application and compiling them into a single, working whole that can be reproduced and installed as a unit. With the advent of component-based development, this is no small feat. The build may encompass dozens, if not hundreds, of discrete modules, libraries, and files. As a result, the build can take days, if not weeks, or even months, to get it right.
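To make that build discipline concrete, the following is a minimal sketch of a scripted, repeatable build step; it is not a real build system. The manifest file name, the component list, and the output archive are hypothetical placeholders. What it illustrates is the chapter's point that assembling many components must be automated and auditable, so that what goes into an installable unit is explicit and checkable.

```python
import hashlib
import json
import zipfile
from pathlib import Path

def run_build(manifest_path="build_manifest.json", output="release_build.zip"):
    """Assemble the components named in a manifest into one installable archive.

    The manifest is a JSON list of relative file paths, standing in for the
    dozens or hundreds of modules, libraries, and files a real build pulls in.
    """
    components = json.loads(Path(manifest_path).read_text())
    missing = [c for c in components if not Path(c).is_file()]
    if missing:
        # Fail loudly: a build that silently omits components cannot be trusted.
        raise FileNotFoundError(f"components missing from build: {missing}")

    checksums = {}
    with zipfile.ZipFile(output, "w", zipfile.ZIP_DEFLATED) as archive:
        for component in sorted(components):      # fixed order, run after run
            data = Path(component).read_bytes()
            checksums[component] = hashlib.sha256(data).hexdigest()
            archive.writestr(component, data)
        # Record exactly what went into the build so any installation can be audited.
        archive.writestr("BUILD_CONTENTS.json", json.dumps(checksums, indent=2))
    return output

if __name__ == "__main__":
    print("built:", run_build())
```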

PROVIDING APPLICATION SOLUTIONS In the time-box, rolling release world, builds are done no less than weekly. The only way for this to work, and work consistently, is for the code to be under tight management and control, and for the build process to be strict and streamlined. Speed has a way of burning off fat, and sloppy coding practices cannot survive the friction of this new model. Standard Standards Many development shops have adopted, documented, and published development standards, only to find them in useless repose, stored in binders, never to be referenced again. Without constant training, consistent code inspections, and other oversight practices, standards quickly fall by the wayside. To a developer, standards are straightjackets to be worn only unwillingly. The new millennium will not tolerate nonstandard practices for the simple reason that they will not work. Delivering increments of functionality means that each succeeding layer must fit smoothly with the others: it is like trying to build a brick wall — the bricks must be of uniform size and shape to keep it from falling over. Not to be overly harsh, but maverick programmers will not cause the organization to step up enforcement procedures; they will cause the organization to cull them out. When running a tight train schedule, one does not coddle late passengers … one leaves them behind. Owning Responsibility Users, on the other hand, must step up to the plate and own the system being developed for them. No longer can the test organization serve as a staging area for new hires or misfits, following a random, spontaneous agenda that shifts with time and turnover. It must be an elite corps of experts who bring their professionalism to bear. Nor can users hide behind the excuse that they are not technical and must rely on development to tell them how and what to do when. Nonsense. They must take the responsibility for articulating what they need, assuring that they get it, and deciding when to release it. They pay the price to have it developed; they will pay the price if it cannot be used. SUMMARY While no one questions that development technology is taking quantum leaps almost every day, few question the fact that our process for applying that technology can still be found in a 1950s textbook. This anomaly is crippling our ability to move ahead and it must be exposed and removed. If something quits working, it needs to be fixed … or replaced. 588

Chapter 48

Ethical Responsibility for Software Development Janice C. Sipior Burke T. Ward

Recent events in the software industry signal a changing environment for development organizations in terms of ethical and legal responsibility should software malfunction and cause financial loss or physical harm to the user. Is it realistic for consumers to expect innovative software rich in both features and quality, and what are the implications for developers if users seek recourse for less-than-perfect products? Computer-based systems have become ubiquitous in our daily lives. No longer limited to traditional data processing, software applications abound in the areas of air traffic and other transportation control, communication, entertainment, finance, industrial processes, medical technology, and nuclear power generation, among others. The resulting increase in demand for a diversity of applications has attracted development organizations to a lucrative commercial and retail market. Software may be mass-marketed, canned software sold at retail under a shrink-wrap, included as a part of turnkey systems, specially developed for systems designed to fulfill users’ particular needs, or embedded as control devices. Developing software for such varied uses outside the development organization increases the risk of financial loss or physical harm to users. If software does not function properly, do the users have any recourse? Are programmers, systems analysts, IS managers, or organizations involved in development efforts ethically or legally responsible for poorly functioning software? 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


Media attention to reports of flawed software exemplifies the mounting market expectations for software to function properly, underscoring the increasing potential for users to seek ethical responsibility through legal recourse. Are users justified in their expectations of properly functioning software? Can developers be held to a higher standard of software quality? This discussion focuses on software developed for sale as opposed to in-house development, because most legal actions would arise between a vendor and a purchaser. Although the ethical and legal responsibility of software developers is currently unclear, the threat to seek legal recourse is nonetheless present and on the rise.

EXAMPLES OF MALFUNCTIONING SOFTWARE

Flaws, resulting in both trivial and severe consequences, have plagued software for decades. A classic example of a serious malfunction occurred in a U.S. nuclear missile warning system on October 5, 1960. Radar sensor input from Thule, Greenland, was erroneously interpreted as a massive attack by the then Soviet Union, with a certainty of 99.9 percent.1 Fortunately, the actual cause of the attack alert was identified, thus precluding nuclear warfare: the rising moon had caused echoes from the radar sensors, a factor overlooked during development.

Unfortunately, software defects are not always detected in advance. A series of tragic incidents occurred in a system developed by Atomic Energy of Canada Ltd. to control radiation doses delivered to cancer patients. Between 1985 and 1987, on-screen editing performed with the up-arrow key caused two modes of operation to be mixed, resulting in a radiation dose more than 100 times higher than the average. At least four patients believed to have received the erroneous radiation overdose died; others were seriously injured. Another malfunction in a medical application caused an overdose of Demerol to be delivered by a patient-controlled analgesic pump, designed to administer narcotic pain medication as needed. As a direct result, the post-elective surgery patient suffered respiratory depression, which caused a heart attack. After lapsing into a coma, the woman died a few days later. The cause was presumed to be an error in the software controlling the dosage delivered by the infusion pump.

These life-critical cases have become classic examples in discussions of ethical standards in software development. The focus of concern, however, is no longer only on those applications wherein defects threaten life and safety. As the software industry matures, mounting market expectations are forcing greater ethical and legal responsibility in a diversity of application areas.

Ethical Responsibility for Software Development MARKET EXPECTATIONS The software industry is now the third largest industry in the United States, generating revenues of $301.8 billion in 2001, a 16.4 percent increase over 2000.2 Employment in the software industry numbered 697,000 software engineers and an additional 585,000 computer programmers in 2000.3 The leaders in the software industry — Microsoft Corporation, IBM, and Oracle Corporation — are no longer the charming fledgling start-up companies they once were. The industry leader, Microsoft, for example, remains a global giant, generating revenue of more than $28 billion in the fiscal year ending June 30, 2002. Supporting a portfolio of more than 250 products and services in 74 countries are 61,000 employees, including a battalion of engineers, marketers, attorneys, and public relations personnel, to promote and deliver quality products to their customers. With such industry changes comes a difference in what the software market is willing to accept. A recurring example concerns E*Trade, the third largest online broker in the United States. Customers have complained about delays in executing trades. The most severe delay occurred in February 1999, when a change in software used for online trading caused the online trading system to fail for nearly three hours, with some customers experiencing continued disruptions.4 E*Trade did acknowledge a “temporary spike” in complaints at that time, but continues to experience software glitches. In one instance, a customer lost $53,000 in a stock purchase order he placed and later cancelled. E*Trade executed the order anyway, 40 minutes after receiving the cancellation. Another customer claims that E*Trade executed his order to sell his shares twice, instead of once, leaving him short. In both cases, arbitration complaints have been filed by the consumers with the National Association of Securities Dealers. In the case of an inexperienced trader, E*Trade forgave his debt, but refused to repay his initial account deposit. The customer reported that his E*Trade account balance showed he had turned a $12,200 investment into $2.4 million, when in fact he had a negative balance of more than $40,000. One of the more well-known examples, because of its repetitiveness, is Microsoft’s operating system flaws. The latest series of bugs plague the recently released Windows® XP, touted by Microsoft as the most secure version of Windows ever. In December 2001, security flaws were deemed “dangerous” because they could allow computer vandals to seize control of XP machines from anywhere on the Internet.5 In August 2002, Microsoft warned users about another serious security flaw that could cause some of them to lose access to encrypted files and e-mail messages, or make it more difficult to use many Web sites.6 In response, Microsoft released several “service packs” for the operating system, comprised of bug fixes and minor enhancements. However, demand for XP was reportedly soft. NPD 591

PROVIDING APPLICATION SOLUTIONS Intelect, a market research firm, cited retail sales of XP during its first full month of availability were considerably lower than for Windows 98. A SHIFT FROM CAVEAT EMPTOR TO CAVEAT VENDITOR Software has entered more aspects of our lives as consumers have become more dependent on nationwide networks for cash machines, e-mail, inventory management, air traffic control, and other functions. Increases in the demand for a diversity of applications by both commercial and retail markets have changed the ethical and legal environment of development organizations. Many consumers are not sophisticated computer users. Programming is completely foreign to them, and the idea that software could produce erroneous results is totally unexpected. Users, like any other consumers, are beginning to treat defects within the software industry as they do defects in other product purchases. If it does not work properly, take it back. If it causes financial loss or physical harm, seek monetary recovery through the legal system. An ethical shift within the software market from the comfortable world of caveat emptor (buyer beware) toward the high accountability of caveat venditor (seller beware) is occurring. The move along this continuum requires an associated increase in responsibility by software developers to users. Today, caveat emptor is no longer an acceptable stance for software developers. Although developers’ motives still “often have more to do with marketing than with quality assurance,” a shift toward the caveat venditor end of the continuum is evident. Users now increasingly scrutinize software and have higher standards in evaluating the performance of purchased software. As market expectations force the software industry away from caveat emptor, developers correspondingly must continue to be more user oriented by initiating and continually enhancing business actions to improve user satisfaction through product quality. CAN DEVELOPERS IMPROVE SOFTWARE QUALITY? Estimates of the national annual economic costs for faulty software range from $22.2 to $59.5 billion annually, representing an estimate of just under one percent of the nation’s gross domestic product.7 More than half of these costs are borne by users undertaking activities to avoid errors or minimize the effects of errors. The remaining costs are borne by software developers as they engage in software testing requiring additional testing resources due to inadequate testing tools and methods. A definite need to improve software quality exists. Software defects are more prevalent, not just because software itself is more prevalent. Software has become dramatically more complex. No longer is the size of software measured in thousands of lines of code, but 592

millions. Further, there has been a widespread move away from highly standardized mini or mainframe systems toward networks comprising various brands of hardware and software connected across long distances. Coupled with the intricacy of such networks is greater flexibility in user interaction, marked by the prevalence of Microsoft's Windows. Various tasks can be performed with no predetermined sequence or combination of events, unlike old programs, which performed tasks through a structured series of command sequences. The new-style software and environment requires exhaustive testing to assess every possible sequence, permutation, and combination of events, which is virtually impossible. Additionally, the average market life expectancy for many software products is decreasing.

Improving software quality is an important objective in the software industry. In response, the worldwide market for software testing tools is expected to grow from $931 million in 1999 to $2.6 billion by 2004.8 Software testing tools capture the input of a human tester and generate test scripts to be run repeatedly. Errors are detected, and testing resumes once the cause is determined and the fault repaired. However, each subsequent error is more difficult to detect and correct. What constitutes sufficient testing? When should the testing process end? Although testing tools are increasingly available, only about 75 percent of the code in the 60 leading products in the software industry has been tested. In the overall development community, only about 35 percent of the code in a typical application is tested. The top four development organizations, however, are reported to be committed to quality development, detecting up to 95 percent of software defects before delivery to users. Should these developers be lauded or chastised for this error detection rate? Is this an acceptable rate of error detection, or is software wherein 5 percent of its defects persist unacceptable?

Error detection focuses on correcting errors once they already exist within the software. A more comprehensive approach is to incorporate defect prevention throughout the development process. Hewlett-Packard Medical Group in Andover, Massachusetts, for example, utilizes special methods such as formal inspections, which require experts to analyze specifications and code according to strict rules and procedures. Another means is causal analysis, wherein software errors are analyzed to identify their cause. Preventive measures, which extensively consider unusual events associated with the cause, are then incorporated into the development process. Error prevention and detection, however, can add time and expense to the development process. These methods certainly encourage a focus on quality software but may discourage developers from assuming the risk associated with research and development of new technologies, especially in safety-critical applications. In the highly competitive software market, development organizations are driven to develop innovative software rich in both features and quality,

PROVIDING APPLICATION SOLUTIONS within a tight schedule. Even developers with the strictest quality control may distribute software with some remaining defects. What defects remain may or may not be known by the developer before the software is released. Users have reported frustrating instances of hours wasted in attempting to get software to perform some simple task, only to have the development organization finally admit to a known problem. Software developers themselves express greater optimism regarding their own abilities to avoid flaws in systems development. Yet, even software considered to be correctly programmed has had serious problems. NASA, for example, experienced several difficulties in life-critical applications (see, for example, Note 9). NASA’s software errors have also been quite costly, as exemplified by conflicting message resolution in the landing sensor on the $165 million Mars Polar Lander, causing it to crash. Indeed, “there are no guaranteed assurances that a given system will behave properly all of the time, or even at some particularly critical time.”9 Nonetheless, software developers must continue to strive to develop reliable software or be held ethically and legally responsible. IMPLICATIONS FOR RESPONSIBILITY Responsibility, based either on voluntary initiatives or on legal requirements, promotes the quest for perfection in an imperfect world. However, we as a society seem willing to accept that software can never be perfect. The presence of flaws in commercial and retail software is simply the price we as users must pay for technological innovation. Those who suffer the consequences of software failure disagree. Are the financial losses and physical harm caused by software defects just random events for which no one is responsible? Ethical Responsibility The software industry has responded to the necessity to improve product quality and user satisfaction by continuously initiating business improvements, which become norms and develop into industry practice. For example, technical support desks for users have become common in the software industry although many organizations charge for this service. Such industry practices continue to develop and advance. Various entities, such as individual companies, industry groups, and professional organizations, formalize the industry practice for improving user satisfaction as ethical codes. The Association for Computing Machinery (ACM), for example, has adopted a Code of Ethics and Professional Conduct for its members. For software developers, this code advocates quality in both the process and


Ethical Responsibility for Software Development the products. Ethical codes not only guide conduct within the industry, but also demonstrate the presence of a mechanism for self-regulation. A self-regulatory approach, however, may not always be viewed as sufficient, resulting in an extension of ethical codes into legal regulations. Further, these voluntary ethical codes can be used as a professional standard in a lawsuit. For example, in Diversified Graphics, Ltd. v. Groves, the court awarded $82,500 to the plaintiff, Diversified Graphics, for computer malpractice. In its interpretation, the court recognized that the defendant, Ernst and Whinney, acting as IS consultants and not as CPAs, incorporated the AICPA’s (American Institute of Certified Public Accountants) “Management Advisory Services Practice Standards” in its Guidelines to Practice. These standards are meant to provide guidance to accounting firms, not software development, and thus should not be used to establish a professional standard for IS services. Although this case is an aberration, it nonetheless presents a precedent. Software developers who are not ethically responsible in business dealings with users may be held accountable through the legal system. To decide how much financial liability software developers should be exposed to, the legal system must address the ethical issues of distributive justice and consequence-based ethics. Distributive justice deals with risk sharing or allocation. As movement along the ethical continuum proceeds from caveat emptor toward caveat venditor, a distributive justice view increasingly places the risk for financial loss or harm caused by defective software on the developer. This increase in legal liability is actually a reflection of society’s ethical beliefs, entirely consistent with the legal system’s treatment of manufacturers of other products. There has been a radical shift in risk allocation toward caveat venditor. Consequence-based ethics focus on whether the consequences of such a liability shift are good or bad for society. Clearly, developers would develop and market software innovations more swiftly in a caveat emptor legal environment. Although this may be perceived as a societal good, an additional consequence may be a greater incidence of defective software, potentially causing financial loss or personal injury. The legal system eventually addresses the balance between the competing societal interests of innovation and safety. Legal liability is a serious threat to software developers. In response, developers must be prepared for the heightened expectations of users, emphasizing the need for developers to understand the changing legal environment to which they are subject.


Legal Implications

Currently, there is no uniform law addressing the sale of software and what recourse the consumer has should that software malfunction. Rather, all sales transactions are currently governed by Article 2 of the Uniform Commercial Code (UCC) or by common law. Article 2 applies to contracts involving the sale of goods but does not directly apply to intangibles, such as licenses for software, databases, or other computer information. During the 1980s, the National Conference of Commissioners of Uniform State Laws (NCCUSL) and the American Law Institute (ALI), the co-sponsors of the UCC, attempted to revise the UCC with Article 2B, designed to address software. In 1999, the NCCUSL continued this effort by proposing the Uniform Computer Information Transactions Act (UCITA), a uniform law, separate from the UCC, to govern contracts for computer information transactions.10 These are defined in Section 103 of the Act as "an agreement or the performance of it to create, modify, transfer, or license computer information or informational rights in computer information." The term does not include a transaction merely because the parties' agreement provides that their communications about the transaction will be in the form of computer information. The Act is somewhat all-inclusive in scope but does exclude certain specific types of transactions (e.g., financial service transactions).

The UCITA has been adopted only by Virginia and Maryland, and in different versions. Strong opposition to the UCITA has prevented other states from adopting it. A major concern of opponents of the UCITA is that it is extremely difficult to understand. It is regarded as "…daunting for even knowledgeable lawyers to understand and apply."11 Another major concern is the ability of licensors of software to limit their liability in the event of a breach, through disclaimers and limitation-of-remedy clauses in the contract. The limitation of the software developer's liability is perceived as a shift in the balance of power to large developers, leaving consumers with no bargaining or remedial power should the software fail. What is regarded as particularly unacceptable is that the limited liability is put into effect through shrinkwrap or clickwrap licenses. A shrinkwrap license is an agreement included inside a retail software package that becomes binding on the consumer when he tears open the shrinkwrap. Thus, the consumer opens the box only to find, inside, the license to which he has already agreed. A clickwrap license is contained in the software itself: when the user initiates the software, the license is presented on-screen and becomes binding when the user completes the click(s) of agreement with the contract terms. To be able to utilize the software, however, the user may already have paid or agreed to pay before having the opportunity to read the contractual agreement contained in the software. It should be noted that in certain circumstances, the licensee may return the software, if he paid, or was obli

Ethical Responsibility for Software Development gated to pay, for the software before the terms of the transaction were available for review. CONCLUSION The increasingly complex computing environment makes it more difficult to develop complex systems, correct in terms of the design specifications, with no defects. Even if it were possible, the development team would certainly be unable to foresee and accommodate unanticipated circumstances that might arise during use. Many software errors have been attributed to human error rather than to the design. Must we accept an imperfect software world? Will users accept the inevitability of flawed software without seeking recourse, even in cases of devastating consequences? Clearly, the ethical environment within which software operates has become more complex, and the potential for errors has increased. At the same time, pressure for software to function properly has been building within the computer software industry. A number of decades ago, the term “bug” was first applied to software defects. This cute little term subsequently became widely accepted, carrying with it both expectations for its occurrence and acceptability for its presence. However, the ethical and legal sands have shifted under the industry’s feet. Gone are the days when users were so delighted that the computer was able to do anything at all and that they were willing to accept the inevitability of program bugs. Notes 1. Belsie, L., “As Computers Proliferate, So Does Potential for Bugs,” The Christian Science Monitor, February 16, 1994, p. 10. 2. Desmond, J.P., “Focus Paid Off in 2001 — Software 500,” Software Magazine, 22 (2), 34–38+, Summer 2002. 3. U.S. Department of Commerce’s National Institute of Standards & Technology, May 2002, “Planning Report 02–3: The Economic Impacts of Inadequate Infrastructure for Software Testing,” http://www.nist.gov. 4. Ip, G., “Casualties in Online-Trading Revolution Are Putting E*Trade on the Defensive,” The Wall Street Journal, June 13, 2001, p. C1. 5. Bray, H., “Security Flaws Mar Windows XP: Microsoft Issues Fix of Bug That Lets Hackers Seize Control of PCs,” The Boston Globe, December 21, 2001, p. C1. 6. Bray, H., “Microsoft Warns of Windows Security Flaw,” The Boston Globe, August 30, 2002, p. D2. 7. Bray, H., “Microsoft Warns of Windows Security Flaw,” The Boston Globe, August 30, 2002, p. 3. 8. Shea, B., “Software Testing Gets New Respect,” InformationWeek, July 3, 2002. 9. Neumann, P., "Are Dependable Systems Feasible?" Commmunications of the ACM, 36(2), 146, 1993. 10. The terms “computer information” and “computer information transactions” were introduced in the UCITA, July 3, 2000. 11. American Bar Association, “American Bar Association Working Group Report on the Uniform Computer Information Transactions Act (UCITA),” http://www.abanet.org, January 30, 2002.



Section 4

Leveraging E-Business Opportunities

LEVERAGING E-BUSINESS OPPORTUNITIES Since the publication of the previous edition of this handbook, we have learned a lot more about how businesses can leverage the Internet and extranets to create E-business opportunities. Today we have entered an era that some writers have referred to as “the second wave of E-business.” A shakedown of dot.com retailers and intermediaries has already been followed by a consolidation phase in which short-term financial profits matter. At the same time, established businesses have begun to transform themselves to integrated online and offline businesses — creating mixed models referred to as “clicks and bricks” (or clicks-and-mortar). Much less obvious to the person-on-the-street, however, is the ongoing investment in B2B applications that link suppliers or customers. The ten chapters in this section of the handbook have been organized into two overall topics: • E-Business Strategy and Applications • Security and Privacy Issues E-BUSINESS STRATEGY AND APPLICATIONS The lead chapter for this topic, Chapter 49 entitled “Building an E-Business Strategy,” provides a methodology for transforming an established business into an E-business. According to the authors, a firm moves through experimentation and integration phases before it is ready for a strategic “Ebreakout” transformation strategy. The authors detail the activities and outputs for a transformation project, based on a SWOT methodology (strengths, weaknesses, opportunities, threats) that addresses both the customer and supplier sides of the value chain. The next three chapters describe the competitive E-business landscape for the second E-business wave that is currently evolving. Today we know a lot more about business models that are most likely to survive and Chapter 50, “Surveying the E-Landscape: New Rules of Survival,” provides a high-level overview of what has been learned about survival in two competitive spaces: the visible and widely discussed applications for businessto-consumer (B2C) E-business and the applications between businesses (business-to-business, B2B) that hold significant potential for bottom-line impacts. Chapters 51 and 52 provide some useful frameworks for assessing B2B opportunities and threats. The technology and business issues associated with implementing B2B E-procurement systems are the subject of Chapter 51, “E-Procurement: Business and Technical Issues.” Although the author’s examples focus on the purchasing of nonproduction MRO goods (mainte600

LEVERAGING E-BUSINESS OPPORTUNITIES nance, repair, and operations supplies), the strategic benefits of E-procurement for both MRO and direct goods are being realized today by many firms. Chapter 52, “Evaluating the Options for Business-to-Business E-Commerce,” presents a framework for understanding the range of E-business modules for many-to-many B2B marketplaces as well as one-to-many private exchanges — the successor to proprietary EDI networks. The author shares his insights about some of the key managerial decisions for considering a portfolio of E-business applications and provides a checklist of questions to help managers assess their own competitive B2B options. The last three chapters have a more technical focus. Corporate networks that utilize Web technologies have now been widely adopted, and Chapter 53, “The Role of Corporate Intranets,” is a basic primer on intranet applications and some of their technical challenges — including the basic distributed model and its security issues. In contrast, the integration of Web data captured from the Internet is an emerging capability. The authors of Chapter 54, “Integrating Web-Based Data into a Data Warehouse,” conceptualize the potential benefits and challenges of integrating Web data into data warehousing systems for decisionmaking support. Finally, Chapter 55, “At Your Service: .NET Redefines the Way Systems Interact,” is a short tutorial on Microsoft’s Web services product strategy. Interested readers should note that related chapters on Web services technologies can be found in Section 3, including a comparison of .NET and J2EE in Chapter 34. SECURITY AND PRIVACY ISSUES The growth of the Internet, combined with a rise in terrorist acts, has heightened the public’s sensitivities to two related organizational and societal issues: information systems security and individual privacy protection. Although legislation to protect an individual’s privacy has existed in the United States for several decades, the pervasiveness of the Internet has significantly increased the actions that need to be taken by IT professionals, legal departments, and other executives to safeguard individuals and organizations. The authors of Chapter 56, “Dealing with Data Privacy Protection: An Issue for the 21st Century,” describe some of the recent legislation by U.S. and European Union lawmakers, and the important roles played by Safe Harbor programs. A checklist of recommended actions for U.S.-based companies to ensure data privacy protection compliance is also provided. Chapter 57, “A Strategic Response to the Broad Spectrum of Internet Abuse,” provides a framework for understanding the range of external and 601

LEVERAGING E-BUSINESS OPPORTUNITIES internal threats, and the related business risks, associated with the use of the Internet in business. One of the authors’ recommendations is to expand the organization’s Chief Privacy Officer role to encompass Internet integrity issues. Finally, Chapter 58, “World Wide Web Application Security,” provides a detailed template for incorporating Web application security into specific application development tasks. As stated by the author, the challenge is to satisfy both the market demands for “customer-intimate” applications and the internal compliance demands for a secure technical infrastructure, as well as a secure user management infrastructure.


Chapter 49

Building an E-Business Strategy Gary Hackbarth William J. Kettinger

E-business has the potential to propel a company to “break out” of existing strategic constraints and radically alter business processes, strengthen customer and supplier ties, and open up new markets. However, to achieve this success, a company must rethink corporate strategy in a way that capitalizes on information asymmetries,1 leverages customer and partner relationships, and tailors the right fit of “co-opetition”2 in its business model. To assist companies in their move from traditional to E-businesses, this chapter offers a planning method that charts the path to effective digital business strategy. In the discussion, the authors purposefully use the term “E-business” as opposed to E-commerce, because E-commerce has become synonymous with simply transacting business over the Internet, whereas E-business involves fundamentally rethinking the business model to transform a company into a digitally networked enterprise (Higgin, 1999). An E-business is an enterprise with the capability to exchange value (goods, services, money, and knowledge) digitally. It has properly designed business processes for this new way of conducting business. Further, it understands the human performance challenges not only within its organizational boundaries but also for other people in its enterprise network: customers, partners, and suppliers. E-business is a new way of doing business that involves connectivity, transparency, sharing, and integration. It connects the expanded enterprise through a universal digital medium to partners, suppliers, and customers. It requires the integration and alignment of business processes, technology, and people with a continuously evolving E-business strategy. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


Exhibit 1. Three Levels of E-Business

E-business Strategy
• Level I (Experimentation): No E-business strategy
• Level II (Integration): E-business strategy supports current (as is) corporate strategy
• Level III (Transformation): E-business strategy supports breakout (to be) corporate strategy

Corporate Strategy
• Level I: E-business strategy not linked to corporate strategy
• Level II: E-business strategy subservient to corporate strategy
• Level III: E-business strategy is a driver of corporate strategy

Scope
• Level I: Departmental/functional orientation
• Level II: Cross-functional participation
• Level III: Cross-enterprise involvement (interconnected customers, suppliers, and partners)

Payoffs
• Level I: Unclear
• Level II: Cost reduction, business support, and enhancement of existing business practices; revenue enhancement
• Level III: New revenue streams, new business lines, drastic improvements in customer service and customer satisfaction

Levers
• Level I: Technological infrastructure and software applications
• Level II: Business processes
• Level III: People, intellectual capital and relationships, cooperation

Role of Information Technology
• Level I: Secondary
• Level II: Supports process efficiency and effectiveness
• Level III: Information asymmetries used to create business opportunities

Becoming an E-business does not happen overnight. It typically follows an evolution from initial experimentations with Internet-related technologies to a transformation of the company into an enterprise prepared to compete successfully in the 21st century. As companies evolve, they pass through three distinct levels of E-business strategy development and competence. Challenges must be surmounted to move to higher levels. This chapter discusses the three-levels of E-business and introduces the strategic E-breakout methodology that helps moves an enterprise to “Level 3 — E-business Transformation.” THE THREE LEVELS OF E-BUSINESS STRATEGY MIGRATION E-business strategies pass through three levels of increasing sophistication as shown in Exhibit 1 (Kettinger and Hackbarth, 1999). Some compa604

Building an E-Business Strategy nies are still at Level I (Experimentation), whereby individual departments have taken a technological lead in developing isolated Internet applications. These islands of Internet technology are not tightly tied to corporate strategy or a companywide E-business strategy. For example, the marketing department creates a public relations (brochureware) Web site, research and development uses an intranet for sharing designs, and the purchasing department links its largest suppliers through EDI. While these applications serve parochial interests, senior managers probably have no idea of their payoff nor has their development been considered in light of the overall corporate strategy. Level II (Integration) companies incorporate E-business to support their current business strategies by integrating across functional departments. Their focus is on the direct support of existent business processes. Level II companies are driven by the promise of cost reductions, revenue enhancement, and increased business support of existing operating models. Using technologies such as virtual private networks (VPNs), electronic data interchange (EDI), electronic funds transfer (EFT), and Web-based order fulfillment, these companies achieve linkages with customers and suppliers. The common denominator in these successful Level II E-business companies is a business process view tied directly to their bottom line and a culture that continually hones process efficiency with information technology (IT) advancements. To move from Level I to Level II, companies must make a deliberate effort to link their corporate strategies with the dispersed digital initiatives taking place throughout the company. The vehicle for this integration is business process. It has been the authors’ experience that a thorough review of current or potential customer and supplier process “etouchpoints” reveals opportunities and threats for effective application of Ebusiness technology. Level III companies empower themselves by using E-business strategy to drive corporate strategy. Companies expand inter-enterprise process linkages between customers, suppliers, and partners to create seamless networks. The integration of these linkages is decidedly more transparent, with sharing and trust being much more commonplace. The value chain becomes interconnected in such a way that new revenue streams are identified and developed, customer satisfaction is increased, and customer service is dramatically improved. Level III strategies recognize that people and their intellectual capital give an E-business its strength and flexibility. Becoming a Level III company symbolizes the transformed businesses of the 21st century that use “E-breakout strategies” to build stronger customer ties, exploit intellectual capital, and leverage cooperative relationships with competitors. To achieve this win–win approach, organizations 605

LEVERAGING E-BUSINESS OPPORTUNITIES must continually respond to strategic threats and capitalize on market opportunities. CUSTOMER–SUPPLIER LIFE CYCLE Understanding the company’s business processes is fundamental in transitioning from Level I to Level II. Moving from Level II to Level III requires both an understanding of the company’s business processes but also those of customers, suppliers, and the competition. The customer–supplier life cycle (C–SLC) provides one way of isolating a company’s buying and selling activities to better understand the inter-relationships between customers and suppliers’ business processes and their touch points in the company (Kettinger and Hackbarth, 1997) The overall buying–selling process affects three major areas: (1) the nature of products and services bought and sold; (2) the type of value exchanged between buyers, sellers, and other members of the value chain; and (3) the very definition of a buyer or a seller. A properly implemented Ebusiness strategy, with its associated E-business technologies, may alter the nature of the product or service being offered, its value in the marketplace, or the customer–supplier business relationship. The C–SLC highlights customer and supplier business interrelationships by tracking the process touchpoints of a single product or service’s life cycle. Exhibit 2 describes each process of the C–SLC. The top portion of the exhibit represents the sales processes of the model, while the bottom portion represents the post-sales processes. From a supplier’s perspective, it is important to effectively target the market and advertise for customers, evaluate their product and service requirements and respond to their requests, deliver in a timely manner, and support customers after a sale. Concurrently, customers are searching for product and service information with the intent of more clearly specifying their own requirements, evaluating and selecting a supplier, and ultimately ordering and receiving a product or service. A major point of demarcation in the C–SLC is the sales event. The post-sale E-business portion of the C–SLC includes the manufacturing of products or generation of services, delivery, acceptance and transfer of funds once the customer has acquired the product or service, and post-sales customer support.3 The C–SLC framework is generic to all procurement relationships, not just electronic linkages. However, it is a particularly useful planning tool to help structure a review of existing business processes to determine the potential for turning existing business processes into E-processes. Because every company is both a customer and a supplier, an E-business planning team can use the C–SLC from both the supplier and customer perspectives. From a supplier perspective, many companies too often view 606


Exhibit 2. Customer–Supplier Life Cycle (C–SLC): Following the E-Touchpoints

Sales processes
Supplier:
1. Identify Target & Advertise to Potential Customers (eMarketing)
4. Evaluate Customer Requirements and Determine Capability to Respond
5. Prepare & Respond to Customer Requests (eBid)
Customer:
2. Scan Marketplace and Acquire Product and Service Information (eSearch)
3. Specify Requirements of a Specific Product or Service to be Purchased (eEvaluation)
6. Evaluate and Select a Supplier
7. Order the Product or Service (eOrder)

Post-sale processes
Supplier:
8. Deliver the Product or Service and Bill (Physical and eDelivery)
11. Support Product or Service (eCustomer Service)
12. Evaluate Processes and Improve
Customer:
9. Acquire the Product
10. Authorize and Pay for the Product or Service (ePay)
13. Evaluate Processes and Improve

the final order as an end in itself, rather than recognizing the extent of interaction or touchpoints that influence the customer–supplier relationship throughout the life cycle. Improving the process efficiency of each touchpoint will leave the customer hearing a greater sense of confidence and trust in a supplier and should ultimately increase customer loyalty. From a customer’s perspective, planners can use the C–SLC framework to look down the value chain and evaluate their relationships with different suppliers and determine where E-business could help improve these processes to gain more control over suppliers and achieve better pricing and terms. While this is hardly a novel idea, the application of information technology (IT) in E-business has sped transactions and created a degree of interdependence between members of the value chain not seen before. In this context, it is useful to systematically examine all existing customer–supplier touchpoints and consider how process linkages can be improved and possibly redesigned with E-business technologies. The C–SLC is best used in a specific sequence of steps. First, plot a customer’s interactions with the company as supplier. Look at each touchpoint the company has with a customer and ask whether E-business can add value, simplify, or improve that interaction. Next, plot the company as a customer of supplier companies. In this manner, opportunities can be found in the information asymmetries on either side of the customer–supplier relationships. 607
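As one illustrative way to structure such a review (a hypothetical sketch, not part of the authors' published method), the touchpoints of Exhibit 2 can be held in a small data model so a planning team can walk them from either perspective and record where E-business might add value. The class and field names below are assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Touchpoint:
    number: int          # process numbering from Exhibit 2
    owner: str           # "supplier" or "customer"
    phase: str           # "sales" or "post-sale"
    description: str
    e_business_ideas: list = field(default_factory=list)

# The 13 C-SLC processes, keyed by the numbering used in Exhibit 2.
C_SLC = [
    Touchpoint(1, "supplier", "sales", "Identify target and advertise (eMarketing)"),
    Touchpoint(2, "customer", "sales", "Scan marketplace for product/service information (eSearch)"),
    Touchpoint(3, "customer", "sales", "Specify requirements (eEvaluation)"),
    Touchpoint(4, "supplier", "sales", "Evaluate customer requirements and capability to respond"),
    Touchpoint(5, "supplier", "sales", "Prepare and respond to customer requests (eBid)"),
    Touchpoint(6, "customer", "sales", "Evaluate and select a supplier"),
    Touchpoint(7, "customer", "sales", "Order the product or service (eOrder)"),
    Touchpoint(8, "supplier", "post-sale", "Deliver and bill (physical and eDelivery)"),
    Touchpoint(9, "customer", "post-sale", "Acquire the product"),
    Touchpoint(10, "customer", "post-sale", "Authorize and pay (ePay)"),
    Touchpoint(11, "supplier", "post-sale", "Support product or service (eCustomer Service)"),
    Touchpoint(12, "supplier", "post-sale", "Evaluate processes and improve"),
    Touchpoint(13, "customer", "post-sale", "Evaluate processes and improve"),
]

def review(perspective):
    """List the touchpoints seen from one side of the relationship."""
    return [t for t in C_SLC if t.owner == perspective]

if __name__ == "__main__":
    # First plot the customer's interactions with the company as supplier ...
    for t in review("supplier"):
        print(f"#{t.number} {t.description}")
    # ... then repeat with the company acting as a customer of its own suppliers.
```

Recording candidate E-business ideas against each numbered touchpoint keeps the later strategy discussion anchored in specific processes rather than generalities.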

LEVERAGING E-BUSINESS OPPORTUNITIES For example, using the process numberings in Exhibit 2, traditional supermarkets and pharmacies might compete with online retailers by providing online ordering (#7 E-order) and drive-through pickups of groceries and prescriptions (#8 Delivery). They might also provide auto-replenishment (#4 E-bid) and home delivery of routine items by third-party logistics companies. They might provide information kiosks to customers in stores and Web-based surveys to learn more about what requirements customers really want (#1 E-marketing). They may defend against simple click-andmortar online models by developing niche strategies such as carving out an area of their store that feels like a convenience store, with convenienceonly parking, quick checkouts, a guaranteed 15-minute end-to-end shopping experience and, of course, higher prices (#13 Evaluate processes and improve) (Sawhney, 1999). STRATEGIC E-BREAKOUT METHODOLOGY Based on a three-year study of E-business strategy development methods, the authors designed a method to assist companies in rethinking a company’s corporate strategy to maximize its E-business strategy.4 It was found that business leaders require a method that exploits strengths, minimizes weaknesses, capitalizes on opportunities, and minimizes threats to systematically adjust business strategies to rapidly changing business environments. The strategic breakout methodology sequentially follows four logical stages: initiate, diagnose, breakout, and transition. The initiate stage envisions potential strategic change, confirms top management support, and determines a project schedule. The diagnose stage gathers information about the strengths and weaknesses of the company as well as opportunities and threats present within the company’s industry. This stage assesses company and industry processes using the C–SLC life-cycle framework. Finally, this stage benchmarks E-business technologies and scans across industries for best practices and E-business technologies. The breakout stage formulates an E-business strategy with the objective of breaking out of the box by using E-business technology to transform processes and people to better compete in a dynamic global marketplace. The transition stage recognizes the reality that the breakout strategy may not be immediately obtainable because of a company’s unwillingness to change, lack of available resources, or shortage of qualified people. The transition strategy serves as a gap strategy that evolves the company in incremental steps toward the breakout strategy. The strategic breakout method embodies activities (A) and outputs (O). Exhibit 3 shows the major features of the strategic breakout method. The following sections describe the method and provide examples from the authors’ consulting experiences. 608

Exhibit 3. Strategic Breakout Method

Initiate (Kick-Off Project)
Activities (A): Outline project scope; Identify project stakeholders; Determine project schedule
Outputs (O): Project workplan

Diagnose (Assess Current Environment)
Industry activities (A): Conduct industry competitive assessment; Benchmark E-business technology; Assess industry business partnerships
Industry outputs (O): Opportunities and threats rankings
Company activities (A): Identify current business strategies; Assess customer relationships; Assess supplier relationships; Assess E-business technology and architecture; Assess business partnerships
Company outputs (O): Current business strategies rankings; Strengths and weaknesses rankings

Breakout (Establish Strategic Target)
Activities (A): Match current business strategies with industry opportunities and threats and company strengths and weaknesses (SWOT matrix); Brainstorm alternative E-business breakout strategies
Outputs (O): E-business breakout strategy

Transition (Plot Migration Path)
Activities (A): Analyze gap difference between breakout and current strategy; Factor in change readiness assessment and cost/benefit/risk analysis; Consider potential industry responses; Plot E-business transition strategy milestones
Outputs (O): E-business transition strategy

Exhibit 4. Stage I: Initiate

A1: Outline Project Scope
• Identify project sponsor/champion
• Define objectives
• Define project plan
• Gather relevant firm documents:
  – preparatory inputs (white papers, customer satisfaction surveys, etc.)
  – E-business market studies
  – customer–supplier feedback information
  – customer–supplier publications/organizational documents
• Identify critical success factors

A2: Identify Project Stakeholders
• Identify relevant stakeholders, roles, and business units that will be interviewed or observed

A3: Determine Project Schedule
• Identify key deliverables
• Determine task force skill requirements
• Set project schedule
• Assign project tasks
• Initiate communication with stakeholders

O1: Project Workplan
• Publish a project workplan

STAGE I: INITIATE

The purpose of the initiate stage is the formulation of an E-business study team that will lay the groundwork to complete follow-on stages and ultimately devise a successful transition strategy derived from the breakout strategy. As shown in Exhibit 4, participants gather initial inputs, outline the scope of the project, and draw up a schedule. Defining the project vision is the most important activity of this stage because it defines the objectives of the E-business study team. The project vision should be

based in the context of the business goals of the company and the expectations of customers, clients, and business partners. The project vision provides a more focused identification of goals and expectations for the realization of new company strategies and the realignment of company processes with E-business technologies. This step also emphasizes the securing of management commitment and the visualization of strategic E-business opportunities.

Outline Project Scope

During this activity, the study team develops an E-business vision by establishing project goals and reconfirms senior management commitment. The executive sponsor gathers enough preparatory information to facilitate the success of initial meetings. Information may include the identification of project sponsors, the gathering of existing market studies relevant to E-business efforts, and feedback detailing the exact wants and needs of customers, suppliers, and business partners. A collection of customer and supplier publications, organizational charts, and white papers will be helpful later in defining the scope of the project and envisioning E-business opportunities.

Identify Project Stakeholders

An important task of this stage is to identify champions and experts who can identify people, roles, and business units crucial to the success of the project. Key is the selection of a facilitator who can communicate and present arguments at the strategic level and act as a business analyst able to analyze research to develop strategic content. Participants are those individuals or business units most likely to be affected by developing E-business strategies that realign people and company processes or add new technologies.

Determine Project Schedule

At the end of this stage, each member of the E-business study team has a clearly defined job and the resources to accomplish his or her assigned tasks. Critical success factors are identified along with expectations, goals, and measurements of success. Goals include the building of an intranet/extranet that encompasses a company's customers, suppliers, and business partners. Task force training is conducted as needed; for example, task force members may need training in how to perform brainstorming or clustering analysis. Finally, deliverables should be identified and a workplan published.

STAGE II: DIAGNOSE

This stage identifies the current corporate strategy of the company and conducts both an industry and a company assessment of customers, suppliers, business partners, and available E-business technologies. These assessments result in (1) an existing business strategic priorities ranking, (2) company strength–weakness rankings, and (3) industry opportunity–threat rankings. These rankings are used to complete the strengths–weaknesses–opportunities–threats (SWOT) assessment form used in the next stage by the study team to consider breakout strategies. Use of this format allows direct comparison of strategies with industry benchmarks and company capabilities. The activities and outputs of Stage II are shown in Exhibit 5 and Exhibit 6.

The first major activity of the diagnose stage is to document existing business strategic priorities and order them in importance. Interviews with senior executives will help identify discrete company strategies consistent with the executives' knowledge of existing processes and the technological infrastructure of the company.

The next major activity of this stage requires the team to assess the industry the firm is in as well as collateral industries impacting the firm. The industry assessment is a study of firms in the same industry. Team members should seek to understand the industry's competitive environment, dominant strategies, competitive opportunities and threats, best practices, and the E-business technology employed. It is essential to determine who the industry's market leaders are and who its major suppliers and customers are. This analysis will lead teams to the identification of leading firms that embody the best strategies, business processes, and E-business technology usage in the industry. Using these leading firms, the project team should be able to make a determination of the opportunities and threats present in the industry being examined. Ultimately, such an industry analysis will serve as a benchmark against which the firm's strengths and weaknesses can be gauged.

As the world marketplace becomes more global and interconnected, companies can no longer consider only the competitor within regional boundaries. New ideas and threats may come from anywhere within the value chain or from new entrants seeking to leverage E-business technologies. For example, airlines use their initial contact with the customer to influence choices of rental car, hotel, and eating establishments. Even in a mature industry (e.g., the railroad industry), companies may use E-business technology to meet a worldwide need for parts, while reducing costs and providing incentives to suppliers and customers to automate their business processes. The complete analysis could focus on multiple geographic regions, specific demographic groups, and congruent products and services.
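A hypothetical sketch, not part of the published methodology, of one way a study team might hold the diagnose-stage rankings so they can be combined into the SWOT matrix that the breakout stage works from. The class name, field names, and sample entries are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DiagnoseFindings:
    """Container for the ranked lists produced by the diagnose stage."""
    strategic_priorities: list = field(default_factory=list)   # current business strategies, in priority order
    strengths_weaknesses: list = field(default_factory=list)   # (item, "S" or "W") pairs from the company assessment
    opportunities_threats: list = field(default_factory=list)  # (item, "O" or "T") pairs from the industry assessment

    def swot_matrix(self):
        """Group the findings into the four SWOT quadrants for the breakout workshop."""
        matrix = {"strengths": [], "weaknesses": [], "opportunities": [], "threats": []}
        for item, tag in self.strengths_weaknesses:
            matrix["strengths" if tag == "S" else "weaknesses"].append(item)
        for item, tag in self.opportunities_threats:
            matrix["opportunities" if tag == "O" else "threats"].append(item)
        return matrix

if __name__ == "__main__":
    findings = DiagnoseFindings(
        strategic_priorities=["grow repeat sales", "reduce order-to-delivery time"],
        strengths_weaknesses=[("loyal customer base", "S"), ("paper-based ordering", "W")],
        opportunities_threats=[("suppliers ready for e-procurement", "O"),
                               ("new online entrants", "T")],
    )
    for quadrant, items in findings.swot_matrix().items():
        print(quadrant, "->", items)
```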

Exhibit 5. Stage II: Diagnose — Industry Analysis

Conduct Industry Competitive Assessment:
• Define the industry. What are the competitive boundaries of this industry?
• Identify the major products/services sold in this industry.
• Categorize the major customers in this industry. Who will they be in five years?
• Identify the principal channels through which products/services are sold.
• Identify best practices by which products/services reach the customer within these channels.
• Identify elements of E-business (process, people, and information) currently utilized to sell these products/services.
• Determine the major competitors (rivals) in this industry. What is the intensity of their involvement? Who will be the competitors in five years?
• Identify the barriers to entry into this industry. Who are the industry leaders?
• Outline general business strategies the leading firms in the industry follow.
• Identify substitutes for products/services of the industry that exist or are on the horizon.
• Determine how firms in this industry add value to products/services throughout their value chain.
• Identify E-business opportunities and threats perceived within the industry by industry firms.

Benchmark E-Business Technology:
• Seek out best industry practices on using E-business technologies for each C–SLC subprocess.

Assess Industry Business Partnerships:
• Evaluate where business partners present opportunities and threats.

Output: Industry Opportunity and Threat Rankings
• Determine industry opportunity and threat rankings.

Team members should also search company documents and conduct interviews with key staff to fully understand the C–SLC relationships with the company's external business partners. These alliances may have been created to outsource unprofitable services or to gain a capability lacking within the company, and they may require realignment as alternative business strategies are considered.

Complementary to the industry assessment is an assessment of the people, technology, information flows, and processes internal to the company. Because many companies truly do not understand the full extent of the interrelationships of their own sales processes, the C–SLC can be used to evaluate and perceive these relationships.

Exhibit 6. Stage II: Diagnose — Company Analysis

Business Strategy Assessment
Identify Current Business Strategies:
• Review and prioritize the current business strategy
• Review and prioritize the current E-business strategy
• Review current business performance
Output: Current Business Strategies Rankings

Firm Assessment
Assess Firm Capabilities Related to Its Customers (in terms of processes, people, information flows, and technology support). Determine how:
• customers are identified
• customers acquire product/service information
• customers specify requirements on products/services
• the firm evaluates the customers' product/service requirements
• the firm responds to the customers' requirements
• customers order products/services
• the products/services are delivered
• the firm receives payment

Assess Firm Capabilities Related to Its Suppliers (in terms of processes, people, information flows, and technology support). Determine how the firm:
• identifies suppliers of products/services the firm requires
• specifies requirements on products/services it has determined it needs to purchase
• evaluates and selects suppliers of products/services
• places an order for products/services
• receives products/services from suppliers
• authorizes and makes payments to suppliers

Assess Firm E-Business Technology:
• Determine current E-business technology usage
• Gather E-business performance data

Assess Firm Business Partnerships:
• Evaluate current business partners as customers or suppliers

Output: Firm Strengths/Weaknesses Rankings
• Determine the firm's strengths and weaknesses rankings

Explicit understanding of all the company's sales processes will aid in assessing whether its corporate strategy is supportable and whether or not E-business can be better applied to specific internal processes. Finally, current and future company architectures should be evaluated for currency, flexibility, and future growth.

The Strategic E-Business SWOT Assessment form (see Exhibit 7) may be used to help present the ranked business strategies, strengths, weaknesses, opportunities, and threats from the diagnose stage.

Exhibit 7. Illustrative Strategic E-Business SWOT Assessment

The exhibit is a matrix that rates each business strategy against each SWOT item on the SWOT evaluation scale: Unclear, Poor, Fair, Good, Excellent. In the illustration:
• Strategies: S1, to be the premium-quality freight carrier; S2, to optimize internal process efficiencies to maximize margins
• Strengths (strategy-to-strengths exploitation): solid reputation; strategic IT use; quality people
• Weaknesses (strategy-to-weaknesses improvement): variant capacities of cargo loads; dependence on the MIS department; inadequate market research; low sales productivity and growth
• Opportunities (strategies-to-opportunities capitalization): expand outside the southeastern United States; capture a greater share of the premium cargo niche market; build a tighter network of business partners; increase cargo per mile carried via better scheduling; incremental process improvement
• Threats (strategy-to-threats minimization): new entrants (FedEx, UPS) enter the premium cargo niche market; E-commerce reduces the need for truck transportation; competitors emulate the strategic IT focus; large customers dictate policies

The SWOT Assessment form offers the project team a snapshot of the company's strategic position at the present time. It is the contrasting, comparing, and clustering of information from this stage that is used to derive the breakout strategy in the next stage. Business strategies (shown in Exhibit 7 as S1 and S2) act as a focal point for both the company and industry assessments. Misalignment of a company's corporate strategy with these assessments indicates a possible threat to the company and a weakness that can be exploited by competitors. A proper alignment might correctly indicate that the company is using its strengths to capitalize on the use of E-business technology and might show opportunities for expansion.

A thorough review of expected business performance criteria should be collated with the existing business strategies and compared with industry best practices and benchmarks. These existing business strategic priorities rankings are used in the breakout stage and form the basis for linking corporate strategy to E-business strategy.

STAGE III: BREAKOUT

The objective of this stage is to develop a strategy that breaks away from stereotypical thinking and alters the course of the company, enabling it to emerge as a leader in specific products and services. The E-breakout strategy should consider not only technologies currently on the market, but also future technologies that have the ability to transform the industry as currently visualized. Existing company business strategies, company strengths and weaknesses, and industry opportunities and threats are evaluated using the Strategic E-Business SWOT Assessment form (see Exhibit 8). From the diagnose stage, the current business strategies (typically five to ten) and the prioritized SWOT dimensions are carried forward. The next step is to judge the alignment between each strategy and each of the SWOT dimensions: every strategy–SWOT pairing is systematically evaluated on a scale ranging from unclear to excellent, indicating the extent to which a particular strategy is aligned with a prioritized SWOT component. Brainstorming and scenario development should be encouraged during this activity to test current strategic assumptions. By matching SWOT dimensions against current and reordered business strategies, new strategies should begin to emerge. Project study team members and senior executive participants have the option of replacing, adding, or reordering priorities. From this assessment, a shared vision of where the firm needs to go should be documented.

The breakout strategy should be built on (at most) a three-year timeframe. As an interesting brainstorming exercise, the team might prepare a brief description of how the firm would appear three years hence, assuming the firm adopts the proposed breakout strategy. This three-year business projection can be painted in many ways (e.g., "a day in the life of a customer or supplier after breakout," or storyboards that include process flows, people, information, and technology used).

Typically, the team is faced with three options. (1) It may infuse E-commerce into existing business strategies to satisfy SWOT deficiencies. (2) It may add new E-commerce–centric business strategies that help optimize the company's overall business strategy set. (3) It may reorder or delete old business strategies in conjunction with the first two options. Ultimately, a strategy is agreed to by consensus or fiat.
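The strategy-versus-SWOT evaluation just described is essentially a small scoring matrix. The sketch below is a minimal illustration and not part of the chapter's method: it uses the chapter's Unclear-to-Excellent scale with hypothetical strategies and SWOT entries loosely modeled on Exhibit 7, and simply flags poorly aligned pairs as candidates for the brainstorming step. The numeric ratings and the flagging threshold are assumptions.

```python
# Illustrative sketch: score each current strategy against prioritized SWOT
# items on the chapter's scale (0=Unclear, 1=Poor, 2=Fair, 3=Good,
# 4=Excellent) and flag weak alignments for the breakout discussion.
# Strategy names, SWOT items, and ratings are hypothetical.

SCALE = {0: "Unclear", 1: "Poor", 2: "Fair", 3: "Good", 4: "Excellent"}

# ratings[strategy][swot_item] -> alignment score
ratings = {
    "S1: Premium-quality carrier": {
        "Strength: solid reputation": 4,
        "Weakness: inadequate market research": 1,
        "Opportunity: expand outside region": 2,
        "Threat: new entrants in niche": 1,
    },
    "S2: Optimize internal processes": {
        "Strength: strategic IT use": 3,
        "Weakness: low sales productivity": 2,
        "Opportunity: better scheduling": 3,
        "Threat: large customers dictate policies": 0,
    },
}

FLAG_AT_OR_BELOW = 1  # arbitrary threshold for "poorly aligned"

for strategy, scores in ratings.items():
    for item, score in scores.items():
        if score <= FLAG_AT_OR_BELOW:
            print(f"{strategy} vs {item}: {SCALE[score]} "
                  "-> candidate for a new or reordered strategy")
```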

Exhibit 8. Stage III: Breakout

Inputs from the diagnose stage: Strategic Priorities Ranking; Company Opportunity and Threat Rankings; Industry Strength and Weakness Rankings.

Evaluate Firm Strategic SWOT Assessment:
• Match current business strategies with industry opportunities and threats and company strengths and weaknesses
Output: SWOT Assessment Matrices

Brainstorm E-Business Breakout Strategies

Finalize E-Business Breakout Strategy:
• Paint an E-business breakout snapshot of the firm as it would appear three years hence, incorporating the changes above
• Document the strategy to include changes to people, business processes, information flows, and technologies
Output: E-Business Breakout Strategy

For example, looking at Exhibit 7, most of the company's strengths are being leveraged; however, the company's strategy to retrain customers is not being extended to business partners. Opportunities are not being fully capitalized on, and threats are poorly addressed by existing strategies. New strategies that seek better business partner information and interrelationships that lower product and service purchasing costs seem to be indicated.

STAGE IV: TRANSITION

The implementation of a radical new corporate strategy supported by new technologies may be more than some companies want to undertake initially. For many, a radical E-breakout is too much, too fast in terms of change.

Exhibit 9. Stage IV: Transition

Breakout Strategy minus Status Quo Strategy equals the Gap, factoring in the Change Readiness Assessment and the Cost/Benefit/Risk Analysis.

Analyze the Gap (Difference Between the Breakout Strategy and the Current Strategy):
• Consider potential industry responses

Plot the E-Business Transition Strategy, Including Recommended Courses of Action and Milestones
Output: E-Business Transition Strategy

People need to be trained, organizations realigned, and new products and services developed. In this context, a gap analysis can be used to help define the differences between the status quo strategic situation and the E-breakout strategy, leading in turn to an E-business transition strategy. This transition strategy is based on the difference between a company's current capabilities and resources and those needed to enact the breakout strategy. It allows time for incremental change and incremental successes. Exhibit 9 shows how a company might approach and implement an E-business transition strategy that supports current and evolving business strategies. The specific activities, tasks, and outcomes of Stage IV are discussed in the following paragraphs.

Change Readiness Assessment. This activity evaluates the company's ability to implement a new E-business strategy based on flexibility measures. People, information, company culture, business processes, and company organization are assessed at a broad level to determine the range of options available to the company. Exhibit 10 is a useful format for evaluating the readiness of a company to adopt and implement the transition strategy as well as, ultimately, the breakout corporate strategy. Changes in corporate strategy will influence the organizational culture, which may be difficult to alter. Organizational structures must be considered in light of their implications for individual role behavior and their links with the environment. Power positions and positional authority affect how information is stored or interpreted.

Informal and formal rules of behavior that place information in a situational context also determine how easy or hard it will be to alter the current corporate strategy, as well as establishing barriers to change.

Cost/Benefit/Risk Analysis. A cost/benefit/risk analysis should be performed on available E-business technologies and the acquisition of the resources needed to implement the breakout strategy. Business analysts can analytically decompose the breakout strategy into its component parts. The aim is not to rewrite the breakout strategy but to fully understand the implications of its implementation.

Analyze the Gap Between the Breakout Strategy and the Status Quo. A gap analysis, which defines the differences between the current strategy and the breakout strategy, will lead to an EC transition strategy. The change readiness and cost/benefit/risk analysis factors are introduced to influence the selection of a transitional path.

Select a Transition Strategy and Conduct E-Business Implementations. Once the transitional path becomes evident, specific EC milestones should be identified. It is recommended that easily attained milestones be attempted first to build confidence and show early E-business success. The transition strategy should focus on the attainment of a series of milestones. These milestones should be built around timeframes of six months to a year, with specific E-business implementation projects defined.

Periodically Update the E-Business Strategy. Few companies exist for long periods without periodically reinventing themselves. These companies develop a learning culture and value intellectual capital more than technology. They recognize that technology must be strategically driven to leverage information asymmetries (Markides, 1998). Consider a regional trucking company that elects to remove low-margin companies from its customer list to concentrate on those companies willing to adopt more efficient business-to-business processes. It is reapplying the old adage of spending 80 percent of the company's time on the 20 percent of customers providing the highest revenue. Thus, the E-business strategy concentrates resources on the most profitable customers.

Case in Point

A major less-than-truckload (LTL) carrier focuses on information quality and service to customers based in the southeastern United States. The carrier has won numerous "Carrier of the Year" awards. It has done it right by gradually becoming an information-based company. It is a leader in E-business not because it outfitted trucks with sophisticated communication and global positioning systems to track deliveries and pickups, but because it continuously improved business processes in line with overall business objectives.

Exhibit 10. Change Readiness SET Assessment Tool

Each factor below is rated on a 1-to-5 scale; the rating columns are labeled Support, Expand, and Transform, and the individual ratings are summed into a Total Score ranging from a low change factor to a high change factor.

• Senior management commitment: Is senior management support (1) visibly removed or (5) actively involved in the EC implementation process?
• Manager's willingness to impact people: Are only (1) modest impacts on people tolerable, or is management willing to deal with the consequences of (5) extreme impacts?
• IT resource availability: Are only (1) minimal IT resources available to support EC projects, or are they (5) abundant?
• In-house expertise availability: Are personal skills completely (1) unrelated to EC, or are they (5) easily related to new EC technologies?
• Structural flexibility: Is the organizational structure (1) rigid, or is it (5) flexible to change and learning?
• Acceptance of paradigm shifts: Is the firm historically (1) reactive or (5) proactive?
• Comfort level with new technologies: Is the firm historically (1) slow to respond or (5) quick to implement new technology?
• Cultural capacity for change: Does the firm culture support the (1) status quo or actively seek (5) participatory change?
• Cross-functional decision making: Does the firm historically make decisions (1) interdepartmentally or (5) cross-functionally?
• Value chain target: Is the EC implementation effort targeted at (1) internal support processes or (5) core processes?

Total Score: sum of the ten ratings, ranging from a low change factor to a high change factor.
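As a minimal sketch (not from the chapter) of how the SET ratings in Exhibit 10 might be tallied, the code below sums ten 1-to-5 ratings into a total score. The factor names come from the exhibit; the example ratings, and the reading of a higher total as greater readiness, are assumptions.

```python
# Sum the ten change-readiness factors of Exhibit 10, each rated 1-5.
# The example ratings are hypothetical; the exhibit itself labels only
# the endpoints (low change factor vs. high change factor).

factors = [
    "Senior management commitment",
    "Manager's willingness to impact people",
    "IT resource availability",
    "In-house expertise availability",
    "Structural flexibility",
    "Acceptance of paradigm shifts",
    "Comfort level with new technologies",
    "Cultural capacity for change",
    "Cross-functional decision making",
    "Value chain target",
]

ratings = {name: 3 for name in factors}        # placeholder mid-scale ratings
ratings["Senior management commitment"] = 5    # example: strong sponsorship
ratings["IT resource availability"] = 2        # example: scarce IT resources

assert all(1 <= r <= 5 for r in ratings.values())
total = sum(ratings.values())
print(f"Total score: {total} (possible range {len(factors)}-{len(factors) * 5}; "
      "higher totals suggest greater readiness for change)")
```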

By focusing on quality, the company abandoned a strategy of increasing market share at any cost and dropped low-margin customers who refused to improve their business processes. Not willing to stand still, this company has automated its sales force with laptops and PDAs, moved customers to Web-based EDI via VPN, and provided ERP-capable interfaces between customers and suppliers. The addition of yield management software, similar to that used by the airline industry, has the potential to increase the company's profit per mile. Its customer-first strategy continues to be successful because it scans the marketplace for best practices and maintains employee buy-in at all levels of the organization. By following a transitional strategy that is guided by a "breakout" vision, this company has managed to integrate E-business as a component of its corporate strategy.

CONCLUSION

A true breakout strategy may never be achieved, yet a targeted transition strategy can allow a company to adapt quickly in a robust and dynamic marketplace. It is critical that managers make the effort to understand both the customer and supplier sides of their company prior to implementing E-business technology. These interrelationships can be extended throughout the value chain to leverage information asymmetries. Focusing on the C–SLC is one way of seeing these relationships and visualizing alternative strategies made possible by technology. The more extensive and innovative the business process changes foreseen by the breakout strategy, the more critical it is for managers to complete a business audit of their IT infrastructure, as these capabilities influence the speed and nature of the business process change desired within the company (Broadbent et al., 1999).

Clearly, all companies are looking to exploit strengths, improve weaknesses, capitalize on opportunities, and minimize threats. Using a systematic methodology to interrelate and compare SWOT factors, companies may reorder or add business strategies via brainstorming techniques to attain their business goals. Properly conceived, a breakout strategy can leapfrog the status quo, propel a company ahead of competitors, and influence the direction of an industry. The volatile nature of technology suggests a window of 12 to 18 months of relative stability before the technological landscape requires reevaluation. Recent writers have suggested the term strategic improvisation to describe the need to rapidly shift priorities within the dynamic global marketplace (Sawy et al., 1999). Therefore, the strategic E-breakout method should be used at least annually to reconfirm industry and company assessments. The value of this method of strategic change is that it allows companies to balance the big picture with short-term limitations and to have a process in place to alter E-business strategic direction when the need arises.

Notes

1. Information asymmetries exist whenever a company leverages information about markets, customers, or its operations that is unusable by or unavailable to its competitors. In the information age, customers, suppliers, and competitors share a common flood of business and economic data from multiple communication channels. This information ubiquity pushes business strategists to work even harder to establish "information asymmetries," whereby they strategically structure competitive relationships characterized by their companies' possession of higher quality information than is held by competitors, customers, and suppliers (Kettinger and Hackbarth, 1999).

2. The idea of "co-opetition," as it relates to E-business, was introduced by James Moore (1997). Internet-based ecosystems, such as multi-vendor portals, can create co-opetition, whereby companies cooperate with competitors to offer complementary services or channels, benefiting both parties. In other instances, these same competitors may be important suppliers or customers for a company's products and services; in still other business situations, these companies may be fierce competitors. Balancing these complicated relationships requires a business model with well-thought-out co-opetition that leverages information asymmetries.

3. It is important to note that even though both the supplier and customer should evaluate the life cycle success of a particular product or service at process step 12 or 13, it is likely that the evaluation procedure is an ongoing activity. For example, if the supplier cannot meet the requirements of a customer, no product or service can be delivered. Thus, a supplier may wish to reconsider the appropriateness of its product or service capabilities in order to induce the customer to buy its product or service. Alternatively, if the customer cannot locate a suitable supplier, the customer may wish to reconsider its purchase requirements or continue to scan the marketplace for a more suitable supplier.

4. The strategic E-breakout method is an archetype incorporating the best concepts of many of the leading E-commerce consulting companies. It was originally developed for instructional purposes in Professor Kettinger's E-business strategy graduate class at the Darla Moore School of Business at the University of South Carolina. It has subsequently been tested and refined in more than 50 E-business strategy development consulting engagements with firms ranging in size from small start-ups to multibillion-dollar enterprises.

References

Broadbent, M., Weill, P., and St. Clair, D., "The Implications of Information Technology Infrastructure for Business Process Redesign," MIS Quarterly (23:2), 1999, pp. 159–182.
Cross, K., "The Ultimate Enablers: Business Partners," Business 2.0, Feb. 2000, pp. 139–140.
Higgin, R., "E-Business: Report from the Trenches," Beyond Computing, Sept. 1999, pp. 46–48.
Kettinger, William J. and Hackbarth, Gary, "Selling in the Era of the 'Net': Integration of Electronic Commerce in Small Firms," in Proceedings of the Eighteenth International Conference on Information Systems, Atlanta, Dec. 15–17, 1997, pp. 210–215.
Kettinger, William J. and Hackbarth, Gary, "Reaching the Next Level of E-Commerce," Financial Times, London, Special Section on Mastering Information Management, March 15, 1999.
Markides, Constantinos, "Strategic Innovation in Established Companies," Sloan Management Review (39:3), 1998, pp. 31–42.
McMullen, Melanie, "Let's Get Down to Business," Internet Business, October 1998, p. 15.
Moore, James, The Death of Competition: Leadership and Strategy in the Age of Business Ecosystems, New York: HarperBusiness, 1997.
Sawhney, M., "The Longest Mile," Business 2.0, Dec. 1999, pp. 235–244.
Sawy, O.A.E., Malhotra, A., Gosain, S., and Young, K.M., "IT-Intensive Value Innovation in the Electronic Economy: Insights from Marshall Industries," MIS Quarterly (23:3), 1999, pp. 305–335.



Chapter 50

Surveying the E-Landscape: New Rules of Survival
Ravindra Krovi

This chapter discusses the forces governing the ever-changing competitive landscape in this new economy and identifies several survival factors in both the business-to-consumer (B2C) and the business-to-business (B2B) sectors. The first section previews the competitive landscape and discusses some of the hurdles and challenges facing these companies. The subsequent sections outline survival factors in the B2C and the B2B space. The chapter ends with conclusions and directions for future research.

THE COMPETITIVE LANDSCAPE IN THE E-BUSINESS ARENA

The competitive landscape in the E-business arena consists primarily of three types of players: clicks (Internet pure plays), click-and-mortars (brick-and-mortar companies with online operations), and providers. The clicks (dot.coms) come in various forms. Typically, they sell online (virtual storefronts such as Amazon.com), bring together buyers and sellers (Freemarkets.com), or act as information brokers (Autobytel.com), transaction brokers (such as E*Trade.com), auction sites (eBay.com), or content providers (such as the WSJ). More recently, however, traditional brick-and-mortar companies have started online ventures (and are then referred to as click-and-mortar). These companies (like Toys R Us and Charles Schwab) try to leverage their online and offline components, a strategy that has worked well. Some companies in this E-landscape can also be referred to as the providers. During the Gold Rush, the people who made the money were not the miners but rather the people who provided essential services and supplies to the miners. In a similar sense, companies that have provided infrastructure services, like Cisco (network hardware), Ariba (B2B procurement software vendor), DoubleClick (Web advertising services), UPS (shipping services), and NewRoads (fulfillment services), also did well.


Dot.Com Start-Ups and Failures

The often-quoted model of competitive forces by Michael Porter provides a good foundation for understanding the new economy. According to Porter, every organization is subject to the impact of competitive forces. These include the bargaining power that buyers and suppliers might have, as well as the threat of new entrants, substitute products, and the intensity of rivalry among competitors. To succeed in this environment, an organization must have leverage over at least some of these forces. Today's environment is even more complex due to increased globalization and deregulation. But the single factor that has undoubtedly made everybody sit up and take notice is the Internet. Its immediate impact has been the proliferation of small start-up companies, which have created altogether new and unique business models. According to estimates presented at Oracle's E-business forum, the competitive landscape has altered so dramatically that it is almost impossible to keep track of new competitors. For example, compared to a couple of years ago, there are now 50 new computer resellers, 20 new office product suppliers, and so on. Customers are now presented with a wide array of choices they never had before. Consider, for example, a customer who wishes to purchase a Palm Pilot: the customer can now go to the 3Com store, register in one of eBay.com's several auctions, get a low-priced option at buy.com, join a buying consortium like accompany.com, or buy older models at outletzoo.com.

But not all of these companies could survive. Intense competition among the dot.coms, combined with an ongoing sell-off in dot.com stocks, resulted in a rapid rise in buyouts and bankruptcies. It is no surprise that several of the dot.coms that had initial public offerings in 1995 or later have become penny stocks. The same is true of companies in the business-to-business sector. What started out as a wave of companies with B2B exchanges has whittled down to a few who have identified the keys to success and learned to survive in this complex landscape. A prime reason for the ill health of most dot.coms has been that most of them have not focused on profits at all. In fact, the focus has primarily been on brand building, the hope being that over time, after consolidation, customers will recognize the business as the prominent entity in a field of few competitors. But clicks are finally being forced to recognize that to compete in an already crowded space, coupled with higher expectations of returns, they have to do a better job of focusing on profitability as well. This includes finding new ways of attracting customers to their storefronts and then translating these customers into sales. To do so, it is important to go back to basics such as order fulfillment and customer service.

Click-and-Mortar Opportunities

One of the biggest realizations over the past few years is that the Web holds just as much potential for traditional brick-and-mortar companies as it does for new economy start-ups. Brick-and-mortar companies, which have always faced the danger of being "Amazoned" or "dot.commed," have responded in unique ways. They can leverage both online and offline parts of their business. For example, Williams-Sonoma has a bridal registry that allows customers to register online, while gift buyers can go to the store and look up a database of all the gifts bought.

While there are benefits to having both an online and an offline presence, there are risks as well. One of the biggest problems for mega-retailers such as Wal-Mart and Kmart is that they also face the risk of a tarnished image because of an inability to translate their offline success into the online market. For example, a company like Home Depot that prides itself on product variety and knowledgeable staff will find it difficult to completely replicate all these features on the Web. In other words, consistency has to be maintained between the online store and the offline store. Also, the expectations for retailers are higher because they have already established customer expectations through their offline operations. Early mover advantage could also quickly turn into a liability if the company is unable to fulfill all orders. To do so effectively requires tight linkages between the online storefront, the organizational ERP, and suppliers. This, to a certain extent, explains Walmart.com's recent decision to shut down the site for complete renovation.

The Power of Strategic Alliances

In this new economy, organizations that are able to create strategic partnerships with others are the ones that have flexible business models. Creating successful relationships is also necessary for companies looking to go global. In these cases, the first issue is a consideration of where and with whom. In some countries, the climate for E-commerce initiatives has been friendly early on (e.g., Sweden, Hong Kong, Singapore, and Ireland). Successful companies have also managed to cultivate long-term relationships with established local presences in an effort to leverage the power of an already existing brand name; examples include Amazon's acquisition of German book retailer ABC Bucherdienst to create Amazon.de and eBay's acquisition of Alando, another German auction start-up.

E-business technologies have matured considerably (see Exhibit 1): we are slowly converging to the point where the infrastructure technologies are more affordable and accessible. Customer preferences for E-business products and services are becoming very sophisticated. We are also seeing the emergence of new companies that are beginning to proliferate in the B2B space.


Exhibit 1. E-Business Evolution

Given today's environment of mature technologies and few proven business models, it is important to understand what allows a company to succeed and what does not. The technology only provides the potential. Successful companies have learned to manage the technology and align it with a clear business vision and strategy. Which companies will dominate the B2C and the B2B space in the coming years will depend on how well they have learned the lessons of the early entrants.

SURVIVAL IN THE BUSINESS-TO-CONSUMER (B2C) SPACE

Typically, site visits are initiated from a referral site (like a portal) or through an ad banner. While it is very difficult to estimate the actual numbers, past experience has shown that click-through rates are usually in the neighborhood of 2 percent. For clicks, the key is not only to attract visitors to the site but also to convert them into actual buyers. For click-and-mortar firms, there is the possibility of cannibalization. For example, customers who would have gone to the physical storefront are now making online purchases. In other cases, customers go to the site to research the product but actually buy it from the offline store. Hence, the key for these players is to measure the net number of new and unique customers.

To identify the success factors for most business-to-consumer commerce companies, it is necessary to examine the business cycle of selling on the Net. The life cycle of activities a click must go through is really no different from that of traditional businesses. These activities include:

• Getting customers to your site
• Allowing customers to browse your products
• Fulfilling the order
• Servicing the customer

The only difference is that for clicks, some of these activities are online or Internet based. Early on, most clicks primarily focused on the first two activities, and some completely ignored order fulfillment and customer service.

Exhibit 2. Determinants of Survival in the B2C Space

The exhibit depicts five B2C survival factors: promotion, functionality, fulfillment, service, and infrastructure.

Organizations that have paid meticulous attention to some of these offline components have done very well. Based on this cycle of activities, five key success factors can be identified for B2C (see Exhibit 2).

Promotion

The challenge for most dot.coms is to attract customers to their site. Given extremely low click-through ratios and even lower prospect conversion ratios, it becomes crucial to find unique and creative ways to attract customers to the site. In addition to having a dynamic pricing strategy, the most important thing is to target advertising (online or print media) to specific groups based on who is buying from the site and why. This allows for creative ways to customize the site in terms of content as well as experience. Personalization of this form is an excellent strategy for retaining existing customers and increasing add-on purchases. The ability to identify such patterns of customer behavior, however, depends to a large extent on how well the organization can integrate and then mine the data from different sources such as server log files, surveys, and online purchases.

Functionality

There are several attractive sites, but most of them tend to be cluttered with too much information and graphics. While visuals and graphics are important, they may violate the famous "eight-second" rule, an observed heuristic used by designers based on the assumption that customers wait, at most, eight seconds for a site to load. So, if a site is cluttered with graphics or has a slow upstream connection to its ISP, potential customers may be lost. Jakob Nielsen, a noted usability expert, has outlined several criteria that need to be taken into account by site designers. More important than usability is the bigger notion of functionality: functionality is what makes a customer's experience easier and more pleasurable.
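To make the promotion funnel concrete, here is a back-of-the-envelope sketch that applies the roughly 2 percent click-through rate cited earlier in this section to a hypothetical banner campaign. The impression count and the conversion rate are invented for illustration and are not figures from the chapter.

```python
# Back-of-the-envelope funnel: ad impressions -> site visits -> buyers.
# The ~2% click-through rate is the rough figure cited in the chapter;
# the impression count and the conversion rate are assumptions.

impressions = 1_000_000        # banner impressions (hypothetical)
click_through_rate = 0.02      # ~2% as noted in the chapter
conversion_rate = 0.03         # visitors who actually buy (assumed)

visits = impressions * click_through_rate
buyers = visits * conversion_rate

print(f"Visits: {visits:,.0f}")   # 20,000
print(f"Buyers: {buyers:,.0f}")   # 600
```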

Exhibit 3. Functionality and Usability Criteria

• Content/layout: colors, graphics, layout
• Ease of navigation: frames, scrolling text, standard link colors, non-graphic alternatives, structural navigation
• Searchability: site map, keyword search, product category search, price search
• Visibility: search visibility, sign-in visibility, company information, help visibility
• Checkout process: online service, number of traversed pages in checkout, payment methods, order status, product availability
• Trustworthiness: privacy and security guarantees, cookie policy, product and design quality
• Other: personalization, download time, mobile connectivity

This includes checkout/order processing, trustworthiness, personalization, and so on. Exhibit 3 summarizes several of the usability and functionality criteria.

Fulfillment

One key to success is the reduction of problems related to logistics (late delivery) and inventory (out of stock). This was very evident in the Toys R Us fiasco in 1999, when the company had 1.7 million orders, of which 5 percent could not be fulfilled. To compensate for this, Toys R Us had to pay $100 for each unfulfilled order. Organizations also need to recognize that there is a big difference between fulfilling individual orders (keeping track of millions of customers and smaller orders) and fulfilling fewer but larger orders to distribution centers and retailers. For several organizations that do not have offline logistics components, especially dot.com retailers, it might make more sense to outsource the fulfillment process to logistics experts instead of trying to build more infrastructure.

Service

Customers need assistance in several forms. This includes assistance related to finding a product, assistance during the checkout process, and assistance after purchase. While it is easy to provide service in a physical storefront, virtual storefronts have to be more creative in their approaches. Some approaches are FAQ files, which can be updated dynamically, and intelligent e-mail management packages such as Kana, which provide scripted answers to customer queries. More recently, live help buttons have appeared on several sites that activate a session window between a representative and the customer. However, it is important to have representatives manning these live help links outside traditional working hours as well.
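A quick calculation, using the figures quoted in the Fulfillment discussion above (1.7 million orders, 5 percent unfulfilled, $100 per unfulfilled order), shows the scale of the penalty; the dollar total is derived here and is not stated in the chapter.

```python
# Rough cost of the 1999 Toys R Us fulfillment shortfall, using the
# figures quoted in the text; the total is derived, not quoted.

orders = 1_700_000
unfulfilled_rate = 0.05
penalty_per_order = 100          # dollars

unfulfilled_orders = orders * unfulfilled_rate          # 85,000 orders
penalty_total = unfulfilled_orders * penalty_per_order  # $8,500,000

print(f"Unfulfilled orders: {unfulfilled_orders:,.0f}")
print(f"Penalty paid: ${penalty_total:,.0f}")
```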

Infrastructure

Infrastructure is the key to functionality and, even more so, to fulfillment. Most companies have ignored the back-end infrastructure, which includes technical as well as business infrastructure. Some functions are absolute necessities. For example, an order entry system should be able to accept orders for customized products with promised delivery schedules. Further, customers need to be provided with real-time order and shipment status. Billing processes should be able to automatically generate invoices in other currencies using the current exchange rate. Often, it is not just a question of whether an organization has a business infrastructure with warehouses and delivery trucks, but also whether the infrastructure capacity can match the demand. Hence, companies like Wal-Mart have backed off from going onto the Internet in a full-scale way. Their problem is not so much attracting customers to the site as fulfilling their orders.

SURVIVAL IN THE BUSINESS-TO-BUSINESS (B2B) SPACE

The B2B market was largely ignored during the initial wave of electronic commerce. Electronic marketplaces increase the options for organizations by bringing in as many buyers and sellers as possible. In some industries, like plastics, they also enable the disposal of used or surplus materials such as resins. This was a lot more difficult to do before because of the searching costs involved in trying to find the right buyer; due to the cost of searching, procurement managers often had to select a product that was not the best available. In some industries, such as aviation, exchanges might help buyers configure their product. Electronic marketplaces also can potentially offer value-added services such as personalized content, industry-related news, and so on. In return, they typically take a percentage of each transaction as a fee.

The proliferation of online marketplaces has created a quandary for several organizations. Should they be affiliated with the existing exchanges? If so, should they affiliate with an existing vertical exchange, or should they affiliate with an exchange managed by a major partner? And how does this change their existing relationships with suppliers and distributors? There is also a recognition that B2B marketplaces may result in disintermediation. In other words, if the manufacturer is able to find a buyer directly, then long-term and expensive relationships with distributors may no longer be necessary. How can distributors who have dominated traditional supply chains react? Distributors will have to realign their roles and strategies in the new millennium. To combat the effect created by exchanges, distributors will have to create their own hubs.

Exchanges have evolved from the "If you build it, they will come" mentality to developing relationships. The successful exchanges will be the ones that offer the most viable value proposition for organizations. There are four key factors that will determine the survival of exchanges in the immediate future (see Exhibit 4).

Exhibit 4. Determinants of Survival in the B2B Space

The exhibit depicts four B2B survival factors: industry structure, product/process environment, standardization, and infrastructure.

Industry Structure

Exchanges can be useful where buyers need products (e.g., MRO supplies) on a consistent and predictable basis. On the other hand, uncertainty about market demand structures will typically result in inefficiencies such as surpluses and deficits, which have to be dealt with immediately. Such inefficiencies can be exploited by an exchange (e.g., Freemarkets), but there is no way to predict long-term sustainability and growth. It is also likely that buyers and sellers who ordinarily have a hard time finding each other will prefer to use exchanges. However, online exchanges or marketplaces are not viable if buyers and sellers are not ready to do business in a whole new way. Another cause of recent B2B failures is that there are too many options for companies to choose from.

Process and Product Environment

Supply chains that have more nodes or intermediaries will be more difficult to automate on the Internet. In some industries, the nature of the product itself (it decays over time, resulting in spoilage) means that it must be dealt with immediately. It is also important to understand what mechanism is typically used for exchanging goods. Typically, auction models are better suited when there is time pressure for the seller to get rid of excess inventory. This usually occurs in what Kaplan and Sawhney refer to as spot sourcing, where the buyer looks to fill an immediate need.1 In some cases, all the exchanges have managed to do is bring the buyer and supplier together. Once the buyer and seller have established a relationship, there is no need for the exchange anymore. Hence, exchanges need to evolve from mere meeting places to something more valuable.

Standardization

Each company within each industry uses different codes for describing product-related data. For companies that want to create custom catalogs from various suppliers, this involves integrating the disparate data description formats and codes. Clearly, the best way to do this is to standardize across each industry and then invite software vendors to use XML tag specifications based on such standardization. Products that have relatively standard specifications (e.g., computer memory chips) and standards that dictate conformance (e.g., ISO) will be easier to buy and sell in exchanges. On the other hand, products such as plastic resins, which require visual quality control inspections, will be more difficult to deal with in electronic marketplaces. In such cases, the exchange also has to assume this responsibility and guarantee the product quality.

Infrastructure

The average supply chain is beset with process inefficiencies. Typically, a lot of information is rekeyed across the supply chain, which results in higher order entry error rates. This, coupled with inaccurate projections, results in longer lead times and incomplete shipments. Hence, the secret to business-to-business commerce is the sharing of information across the supply chain. This requires cooperation among all the business entities to create a common platform for integrating existing systems. The ability to efficiently share information across the chain results in less excess inventory in warehouses. Manufacturing processes can also be sped up and customized to buyer specifications efficiently. Most exchanges, however, cannot tie into suppliers' back-end systems. This means that even if exchanges are taking orders, these orders have to be reentered into the supplier's order processing module in the ERP system. Several E-marketplaces are being forced to deliver on the promised potential of benefits arising from data being exchanged instantly across the supply chain. These exchanges may now turn to B2B infrastructure providers to help them deliver on such promises.

Given the recent explosion of electronic marketplaces and exchanges, organizations are taking a fresh look at creating inter-enterprise business solutions. For these companies, the challenge is to change age-old processes and conservative management philosophies and to redefine existing relationships with external partners. If your organization is considering such E-business initiatives, it is vital to address issues that might very well determine the success of such efforts. These include:

• Can you overcome resistance? Most traditional mainstream companies have a culture where the prevailing mindset is one of technology being a supporting resource, not necessarily a strategy driver. It is not altogether surprising that E-business initiatives in such companies are not considered to have strategic importance. This, coupled with the recent dot.com meltdown, requires CIOs to present a detailed economic case to upper management.
• Can you manage the expectations and the hype? While there is resistance to E-business initiatives, sometimes the reverse can also be true. A common problem faced by CIOs is the unrealistic expectations of business units and other functional areas. Usually, such hype is a result of media and vendor descriptions of E-business success stories. It is important to provide a realistic view of what E-business can do for the company. Expectations management will also play an important role in ensuring the success of such implementation efforts.
• Should your organization be part of an electronic exchange? Identify the need to join an exchange. Is it an ad hoc need to get rid of excess inventory, or is it a routine need to reduce purchasing expenses? Consider strategic alliances within your industry to share costs. Consider the impact of exchanges on carefully cultivated long-term relationships within the supplier and distributor network.
• Do you understand the implementation challenges? The idea of having a fully functional and linked supply chain base is appealing to all CEOs, but few understand that the implementation of the underlying enterprise computing architecture is anything but simple. Companies rarely have only one platform, and they do not build from the ground up: during the course of expansion and diversification, business units adopt proprietary solutions for their needs. Given this, the choice becomes whether to go for a single vendor-based E-business solution or a "best-of-breed" approach. A clear assessment of the costs versus benefits involved in integration projects should lead to informed decisions regarding outsourcing versus in-house development.

CONCLUSION

This chapter started by discussing the competitive landscape in the E-business arena. To compete successfully, organizations must anticipate the winds of change in their respective industries. Competing in this environment requires a careful analysis of the forces underlying this post-dot.com-crash economy. This chapter has presented and discussed survival factors for competing in the B2C and the B2B sectors. Traditional companies looking to implement E-business initiatives must be aware of the implementation risks and challenges that typify such projects.

References

1. Kaplan, Steven, and Sawhney, Mohanbir, "E-Hubs: The New B2B Marketplaces," Harvard Business Review, May–June 2000, pp. 97–103.



Chapter 51

E-Procurement: Business and Technical Issues
T.M. Rajkumar

The Internet, and electronic commerce in particular, have much to offer in the way of increasing the efficiencies and competitive advantage of purchasing.1 Many companies are developing plans to integrate some form of Internet-based electronic commerce into their supply-chain management practices that will enable them to develop and maintain a competitive advantage. These advantages are typically in the form of reduced costs, increased efficiencies, a greater degree of accuracy, and speedier processing and delivery.

Most E-procurement activities of companies are currently centered on nonproduction goods, mostly maintenance, repair, and operating supplies (MRO). MRO spending accounts for as much as 60 percent of total expenditures for some companies. The Aberdeen Group identifies transaction cost savings of up to $70 and a reduction of cycle times from seven days to two when MRO goods are procured via the Internet.2 Croom3 identifies both operational and strategic benefits to using electronic commerce for purchasing MRO items. The operational benefits include reduction of administrative costs in the procurement process and improved audit trails of each transaction throughout the process. Strategic benefits include having greater influence and control over expenditures, raising the profile of the purchasing function, and having a greater opportunity to manage the supply base. Many purchasing executives believe that the long-term benefit of E-procurement will be the freeing of purchasing resources from transaction processing to refocus them on strategic sourcing activities.


Exhibit 1. B2B Purchasing System

The exhibit shows Company A's purchasing software and ERP/DBMS communicating over the Internet with Company B's E-commerce server and ERP/DBMS.

Historically, a significant portion of supply chain and business-to-business (B2B) purchasing has been conducted via electronic data interchange (EDI), proprietary purchasing systems, and e-mail. Traditional EDI is expensive because of the proprietary networks required. In addition, EDI has stringent syntax requirements that necessitate a custom integration between each pair of trading partners. The Internet has made the difference in that there is now a standard, reliable, and secure universal communication system that companies can use to transact business, instead of a set of expensive, complicated links and proprietary networks.

The objective of this chapter is to discuss the technical and business issues in implementing E-procurement systems. The next section provides an overview of the technologies and the difficulties associated with them. The key success factors for implementing these systems are discussed next, followed by the E-procurement systems development life cycle. The chapter concludes with a look at various issues associated with E-procurement systems.

E-PROCUREMENT TECHNOLOGIES

B2B Overview

Exhibit 1 shows an example of a B2B purchasing system. A customer with Company A uses the purchasing software and places an order on Company B's system. The order is received by an E-commerce server, which is integrated with the back-end ERP/DBMS systems. Similarly, Company A's system is also integrated with its ERP system to enter its data into the accounting and planning systems. The communication between the two systems takes place via the Internet. For this system to work, both Company A and Company B must integrate their back-end systems with the Web. Many smaller suppliers are less inclined to integrate their ERP systems with their E-commerce Web servers, preferring instead to reenter the data manually into the ERP system. In such cases, the benefits of using E-procurement technologies are not fully realized.
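As a minimal illustration of the kind of order document that might flow from Company A's purchasing software to Company B's E-commerce server, the sketch below builds a simple purchase order in XML. The element names, values, and structure are hypothetical rather than any standard format; in practice the document would be transmitted over the Internet and mapped into the supplier's back-end ERP.

```python
# Build a simple purchase-order document such as buyer software might send
# to a supplier's E-commerce server. Element names are illustrative only.
import xml.etree.ElementTree as ET

order = ET.Element("purchaseOrder", id="PO-1001", buyer="CompanyA")
item = ET.SubElement(order, "item", sku="MRO-SAFETY-GLOVES")
ET.SubElement(item, "quantity").text = "200"
ET.SubElement(item, "unitPrice", currency="USD").text = "3.50"
ET.SubElement(order, "shipTo").text = "Plant 7, Receiving Dock"

payload = ET.tostring(order, encoding="unicode")
print(payload)  # this string would be posted to the supplier's server
```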

There are various purchasing software technologies. The following are the most important:4 E-procurement (buyer software), E-catalogs, auctions, and marketplaces (also known as net markets or exchanges software). However, companies' solutions often straddle more than one of these options. A brief description of each of these technologies follows.

E-Procurement

Buyer software enables users to automate transactions and focuses on the buying organization's activities, such as order placement, catalog management, payment, reporting, and so on. Most of these systems currently handle MRO products. Users typically access the client software on their desktop, which provides access to E-catalogs that are customized for their organization. Users typically source from preferred suppliers listed in their catalogs, within limits enforced by purchasing management.5 If a person purchasing an item does not have the authority to buy it, these systems route the document to the appropriate channels and manage the workflow. The purchasing limits and approval routings are stored as profiles for users within the system. Systems such as Ariba and Commerce One typically fall into this category. Such systems can be integrated with back-end ERP and database systems, in which case they will also perform general ledger updates, payment, and so on.

E-procurement systems generally must be capable of integrating multiple supplier catalogs into an aggregated, buyer-managed view of the catalog. They enable review of product purchase patterns and deliver knowledge that can be used to facilitate supplier negotiations. These systems enable purchasing to automate most of the transaction processing, as well as to reduce cycle times, curb off-catalog buying, and free purchasing to focus more on activities such as strategic sourcing. However, these systems have their own drawbacks. They are fairly costly (as much as a few million dollars) to implement, and it is cumbersome to maintain catalogs.5 Still, many companies choose to implement them because they give purchasing the opportunity to reengineer the buy process for MRO items.

Just as buying organizations have E-procurement software, suppliers need software on the sell side that can cater to these buyer systems or exchange information with marketplaces. For example, the sell-side systems need to provide information in the format needed by the buy-side system catalogs.
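A minimal sketch of the profile-based purchasing limits and approval routing described above; the profile fields, dollar limits, and single-step escalation rule are assumptions, since commercial buyer software such as Ariba or Commerce One implements this with its own data model and workflow engine.

```python
# Route a requisition based on the requester's stored purchasing profile.
# Limits, roles, and the single-step routing rule are hypothetical.

profiles = {
    "jdoe": {"purchase_limit": 500, "approver": "mlee"},
    "mlee": {"purchase_limit": 10_000, "approver": "cfo"},
    "cfo":  {"purchase_limit": None, "approver": None},   # no limit
}

def route_requisition(user, amount):
    """Return who must approve the requisition, or None if self-approved."""
    profile = profiles[user]
    limit = profile["purchase_limit"]
    if limit is None or amount <= limit:
        return None                      # within the user's authority
    return profile["approver"]           # escalate one level up

print(route_requisition("jdoe", 120))    # None -> auto-approved
print(route_requisition("jdoe", 2_400))  # 'mlee' must approve
```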

E-Catalog

The E-catalog is the most widely used of the purchasing technologies. At its simplest, it automates customized, printed supplier catalogs. E-catalogs provide a better audit trail and may actually be tied to legacy systems for data capture from transactions, or linked to workflow systems for approval routing. Use of individual supplier electronic catalogs does, however, have its practical limitations. In environments where end users are free to choose from among many suppliers, accessing multiple, disparate electronic catalogs and choosing among them can be problematic. Users must not only know which suppliers can provide the needed goods or services, but also be familiar with a wide variety of catalog formats and data access mechanisms. In addition, product specification and cost comparisons may not be easy. A drawback from the purchasing organization's standpoint is that multiple linkages must be established to capture the transactional data from different supplier catalogs.

The most commonly deployed solution for such environments is catalog aggregation; that is, combining the offerings from all approved suppliers into a centrally maintained catalog. This solution overcomes the data access and integration problems and allows a more product-centric search. It is costly, however, due to the high set-up and maintenance costs. Because it costs a lot to aggregate suppliers, the frequency of orders must be high enough to cost-justify the inclusion of a supplier's data. These aggregation technologies assume that suppliers will provide information to the purchasing organization in a particular format. The data that suppliers provide must not only be readable, it must be machine understandable, so that search engines can find the product and comparisons of specifications and prices can be made. Unfortunately, most supplier-provided catalogs do not meet both of these criteria.6

The solution is to use virtual catalogs, wherein individual online supplier catalogs are not aggregated or duplicated. Virtual catalogs dynamically retrieve information from multiple catalogs and present the data in a unified manner with their own look and feel, not that of the source catalogs. Virtual catalogs do not actually contain any product information of their own. Rather, they contain information pertaining to the contents of online supplier catalogs and linkages to the actual product information contained therein. Once this communication protocol has been established, it is possible to answer user inquiries with responses combined, in a consistent manner, from separate online sources. An organization using a virtual catalog will configure the system to allow users access only to approved products from established suppliers, while allowing the purchasing professionals to access any Internet-based catalog.
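A minimal sketch of the virtual-catalog idea just described: the catalog stores no product data of its own, only per-supplier adapters, and combines supplier responses into one product-centric view at query time. The supplier names, record formats, and in-memory data are all hypothetical; a real implementation would query each supplier's online catalog over the network.

```python
# Virtual catalog sketch: query several supplier catalogs on demand and
# present the results in one unified format. All data is illustrative;
# real supplier catalogs would be queried over the network.

supplier_catalogs = {
    "SupplierA": [{"part": "glove-3x", "desc": "nitrile gloves", "price": 3.50}],
    "SupplierB": [{"item_no": "G-77", "name": "work gloves", "cost": 3.10}],
}

# Per-supplier adapters translate each native record into a common view.
adapters = {
    "SupplierA": lambda r: {"description": r["desc"], "price": r["price"]},
    "SupplierB": lambda r: {"description": r["name"], "price": r["cost"]},
}

def search(keyword, approved_suppliers):
    """Search only approved suppliers and return unified records."""
    results = []
    for supplier in approved_suppliers:
        for record in supplier_catalogs[supplier]:  # stands in for a live query
            unified = adapters[supplier](record)
            if keyword in unified["description"]:
                results.append({"supplier": supplier, **unified})
    return sorted(results, key=lambda r: r["price"])

for hit in search("gloves", approved_suppliers=["SupplierA", "SupplierB"]):
    print(hit)
```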

E-Procurement: Business and Technical Issues log. Legacy systems and workflow linkages are established only once. The virtual catalog eliminates the effort and expense associated with catalog aggregation, offers the flexibility of allowing access to product information from a diverse array of sources and data formats, and provides only the most current product-centric information directly from its source. Auctions E-procurement technologies come with the ability to hold auctions. Both forward and reverse auctions are feasible. In a forward auction, sellers post the goods or services they want to sell, and buyers then bid for the services or goods. Excess capital equipment or inventory is typically sold via forward auctions. In a reverse auction, buyers post a request for quotes for items they want to buy, and allow sellers to bid. In general, when purchasing agents want to buy direct material through auction techniques, they use a reverse auction. The agents prepare by doing basic sourcing and narrowing down their sources to four or five suppliers. The agents then request quotes for auction from these five suppliers. There are also exchanges where auctions can take place that are neutral to both buyers and sellers. The software is pretty flexible, allowing different types of auction rules (Yankee, Dutch, etc.), in terms of open or closed bids, what details can be seen, reserve prices, etc., which are all specifiable by the organization. Marketplaces Organizations find that it is costly to maintain E-catalogs inside their organization, and any transactional costs that may be saved are wiped out. Marketplaces (also known as net markets) have emerged to meet this need. These organizations specialize in aggregating the data of different suppliers and provide overlaying filters so that customized views of prices, terms, and so on are obtained for each purchasing organization.4 However, even these specialized companies have problems of their own. There are so many of these marketplaces — each with its own format needs — that suppliers have trouble meeting individual marketplace requirements. Also, different companies call the same item by different names, so that comparisons are not easily made. This is complicated by the fact that different industries might use the same term with different meanings. Finally, these problems are compounded for direct materials, which might need blueprints and other detailed technical descriptions to go with the system. A number of intermediary firms are arising to meet these needs. Marketplaces may be independent trading exchanges, vertical markets, or portals. Primarily, these marketplaces allow collaboration and data sharing within or across industries. The nature of services can go beyond cataloging to transaction management. Sites such as Ariba.net fall under 641

LEVERAGING E-BUSINESS OPPORTUNITIES this category. They are attractive to both buy- and sell-side organizations for different reasons. On the buy side, they provide demand aggregation, enable quick and easy supplier comparisons, and allow activity reporting, strategic sourcing, and so on.4 On the sell side, they provide low-cost introduction to customers, better capacity management and efficient inventory, production management via demand aggregation, and analytics that help suppliers better position their product in the market. Marketplaces have also been started by industry consortia. For example, GM, Ford, and Chrysler have an initiative (Covisint) by which purchasing for these companies worth over $240 billion per year is expected to be consolidated at one site.7 Feldman describes a few requirements for marketplaces. Marketplaces manage the participants’ information and business process (both buy side and sell side), as well as the transaction. It is also necessary that marketplaces support security, liquidity, transparency, efficiency, and anonymity. At times, users may want to be anonymous, or have their details hidden, or ensure that their histories and trading activities are not revealed to other parties. This must be balanced with the need for reliable information, so every participant trusts them. Because marketplaces are basically services, the services may have to evolve. For example, services that are not common, such as yield management (a service that airlines use for assigning seats), might have to be supported for everyone. Marketplace exchanges such as freemarkets.com also support auctions. Most organizations are currently focusing on exchanges with fixed auction rules applied to simple goods and services. Integrated Frameworks Exhibit 2 shows an integrated framework, depicting how the various technologies described are interrelated and used by companies. On the buy side, a customer uses buyer software to search the companies’ internal catalogs (which contain an aggregated list from all suppliers that is a virtual catalog), or to search an intermediary marketplace site and place an order. The order will go through the marketplace only if the customer searches for the latter and places an order there. Otherwise, the order will be placed directly with the supplier. The supplier’s E-commerce server confirms the order and updates its own back-end systems. When the buyer software receives a confirmation from the supplier/marketplace, it updates its back-end systems. When the marketplace is used, it contains aggregated content from all suppliers that can be searched. It also provides customized views for specific buyers. Auctions are generally conducted using marketplace software. 642
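The buy-side search step in this framework can be illustrated with a short sketch of a virtual catalog that federates queries across supplier catalogs and returns a unified, price-sorted view. The endpoint URLs, field names, and the assumption that each supplier exposes a simple JSON search interface are illustrative only.

```python
import requests

# Hypothetical supplier catalog endpoints; real integrations would use whatever
# search interface each approved supplier actually exposes.
SUPPLIER_ENDPOINTS = {
    "supplier_a": "https://catalog.supplier-a.example/search",
    "supplier_b": "https://catalog.supplier-b.example/search",
}

def virtual_catalog_search(keyword: str) -> list:
    """Query each supplier catalog in place and merge results into one view."""
    merged = []
    for supplier, url in SUPPLIER_ENDPOINTS.items():
        try:
            resp = requests.get(url, params={"q": keyword}, timeout=5)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # skip suppliers that are unreachable rather than failing
        for item in resp.json().get("items", []):
            # Normalize each supplier's format into a common internal schema.
            merged.append({
                "supplier": supplier,
                "sku": item.get("sku"),
                "description": item.get("description"),
                "unit_price": float(item.get("price", 0.0)),
            })
    # Present a single, product-centric list sorted by price for comparison.
    return sorted(merged, key=lambda x: x["unit_price"])

# Example: results = virtual_catalog_search("safety gloves")
```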

Exhibit 2. Framework for Marketplace Services (figure: on the buy side, companies run buyer software that searches an internal aggregated catalog or, over the Internet, a marketplace offering catalog content aggregation/customization and auctions; on the sell side, companies expose E-commerce servers and catalogs, with ERP/DBMS systems integrated on both sides)

LEVERAGING E-BUSINESS OPPORTUNITIES CRITICAL SUCCESS FACTORS Significant planning is needed to achieve the savings promised by E-procurement. Enterprises must focus on certain key critical success factors. Define an E-Procurement Strategy For an organization to become an E-enterprise and compete in E-commerce, it is not sufficient for its information technology and business strategies to be aligned; they must merge. The E-procurement strategy must include a combined technology and business strategy. The vision and leadership must come from the purchasing department and identify the areas in which procurement technologies are most likely to benefit the company and provide for competitive advantage. The company must identify its core competencies and how procurement processes can support the core competencies. The technology strategy must be developed around supporting these core competencies. The technology group must also identify directions in which the technology is heading, such as back-end integration, and move the company to the technology forefront in support of the procurement processes. Reengineer the Procurement Processes The benefits of E-procurement technology will not be apparent if there is simply an automation of existing methods of working. To gain the benefits of reduced costs, better sourcing, and so on, it is essential that a reengineering of the procurement process be undertaken. As a result of implementing Internet-enabled procurement technologies, organizations have found that their supplier relationships are redefined, and that, in general, the number of suppliers is reduced (see Reference 3). Hence, supplier consolidation, etc. must be planned for prior to the implementation of the Eprocurement technology. Companies should gather input from stakeholders throughout the organization, since they are likely to be affected by the reengineering of the procurement process. Companies should also communicate to the stakeholders that this might be a difficult process for everyone. Involve Key Stakeholders Procurement affects every facet of the organization; therefore, key stakeholders from every affected department must be brought into the new system’s planning process. Management must confer with these various groups, taking their inputs into consideration as it carefully assesses those problems the company wants to address and the system’s goals. It is also essential to bring key stakeholders on board early in the process, involving them from the very beginning. If the stakeholders are not behind the effort, 644

E-Procurement: Business and Technical Issues users might not use the system, continuing to use existing legacy methods for procurement instead. Focus on Segments Currently there is no single vendor offering solutions in the entire procurement arena (from E-procurement to exchanges). Solutions from many vendors currently concentrate on MRO and indirect goods such as office supplies, IT equipment, and professional services. Many offerings do not provide support for direct goods, or for integrating suppliers across the supply chain. Enterprises might have multiple procurement strategies: one for direct material and one for MRO items. No single E-procurement tool from a vendor will meet an enterprise’s strategy correctly. It is necessary to segment and choose the vendor for each procurement strategy separately. For example, an organization must choose one vendor for MRO items and another to support direct goods. Identify Useful Measures Organizations should identify useful measures in terms such as cost per transaction for MRO items, cycle time from requisition to fulfillment, etc. They should use measures that can be measured and are useful in predicting the success or failure of the system. Manage Expectations Organizations should manage the expectations of the users and stakeholders by telling them the truth. The technology is still in a developmental stage and the functionalities may be incomplete. Everything that was possible with legacy systems might not be immediately feasible with the new environment. For example, in many instances changes to orders are harder to process with new technologies. In addition, the goods ordered with this technology might be limited to noncoded and MRO items and may be limited to a small percentage (as low as 10 percent) of MRO items in the initial pilot implementation. As acceptance of the technology within the organization increases, the percentage of MRO items available will also increase. OTHER BUSINESS IMPLEMENTATION ISSUES Cost/Benefits Organizations should be aware that the costs to implement the solutions are significant, and the benefits from the investment may not be apparent for a two- to three-year period — until the entire system is implemented. Many companies have a difficult time estimating the costs. Vendors are moving away from traditional licensing and maintenance fees as they try to establish market share and are moving to a transaction-based fee struc645

LEVERAGING E-BUSINESS OPPORTUNITIES ture. In addition, there are many less visible costs (that make up as much as five to ten times the cost of software) that are not included (for example, the cost of consultants, integration, catalogs and search engines, transaction costs, and user training). Similarly, the benefits are not easy to measure. Benefits come from three areas: compliance, leverage, and process efficiency. Impact on Suppliers The organization should assess the impact of the system on suppliers and their technological readiness to implement the system at their end, and should provide the services necessary for the system to succeed. For example, suppliers must be able to provide the catalog information for their products into any system that is designed. It is necessary to put together a supplier adoption team, train the suppliers, and get them ready concurrent with the organization’s implementation. E-Procurement Project Life Cycle Once the business issues have been sorted out and a decision has been made to implement E-procurement technology, a standard life cycle (Exhibit 3) must be followed. • Plan/Analyze. Gather the detailed requirements for the system. The input should be gathered from key stakeholders and users alike. Generate a request for proposals and perform a market search. Analyze various tools available on the market and research their suitability for the organization. Establish criteria for suitability and fitness of the product for your organization. Evaluate the tools against the criteria and have the main vendors or possible candidates demonstrate their product in your company. • Define/Design. Once you have picked a product, conduct a gap analysis to identify the gaps between requirements and the tool’s standard functionality. Have the vendor demonstrate all the key functionality that will be required, so that you know clearly what is not within the tool’s realm. The gaps identify the customization necessary to implement the product in your organization. Prioritize the gaps and develop a cost estimate for the customizations, so that you can decide on which customizations will be taken up first. Develop functional specifications on the customizations necessary for the tool that has been chosen. • Develop/Construct. The technical architecture of the procurement system must be designed and set up. Workflow rules that are necessary for authorization, procurement rules, etc. must be enunciated for the product to be configured. Programs must be written to add or modify the functionality that exists within the standard tool that has been 646


Exhibit 3. E-Procurement Life Cycle

chosen. Each of the programs must undergo unit and system tests, following which user acceptance testing on the product is conducted. • Implement/Deployment. Once the customizations have been developed and tested, the system is ready for pilot deployment. Train the users on the usage of the system, and test the pilot implementation. Modify the system based on the pilot, test it, and then deploy the system at all locations. Example: E-Procurement for MRO Items It is first necessary to have a clear understanding of the business process that needs to be automated and the exact areas that are being automated. This should be laid out clearly. For example, the federal government in a pilot for electronic catalogs8 developed a diagram (Exhibit 4) identifying the procurement processes. This six-step process identified and had a goal of automating all portions of the process, from accessing and searching the catalog to evaluating the result, to placing the order, receiving the items, 647


Exhibit 4. Federal Government Procurement Process

and processing the payment. A clear depiction enables everyone to understand what business processes need to be accomplished by the system. From this, the requirements and specifications for the system can be identified. Not only must the system requirements in terms of functionality be specified, but the integration that is required with back-end systems must be specified. How is the order integrated with existing ERP or database systems? How is the processing of payment integrated with the existing ERP system? What is the integration with the applications running on these ERP systems, etc.? All these must be specified. Once the specifications and integration requirements are written out, a request for proposals is sent. The proposals that are returned must be evaluated based on the following criteria: • Functionality. Does the proposed system have the functionality that is required? For example, in the above case, the system must be able to aggregate and integrate the catalogs of multiple suppliers. Does it support payment processing via procurement or credit cards, etc.? These factors must be evaluated. An evaluation of the functionality must be made from different buy perspectives: buying of an MRO item to buying of services; buying with a procurement card to buying with a credit card, etc. • Technical architecture. What is the architecture of the product? Is it a solution from a single vendor or is it built from components of different vendors? Can the system be extended, and is it flexible? How intuitive is the user interface, and how easy is it to use? Is the performance of the system adequate? Will the system scale from a few transactions to a large number of transactions? • Cost. What is the total cost of ownership of the product. It must be recognized that the acquisition cost is small compared to the overall cost of implementing many of these systems. • Service and support. An evaluation must be made of the support provided by the vendor. Does the company and the solution offered have long-term viability? Will the product be enhanced as new standards and developments come in the area? 648

E-Procurement: Business and Technical Issues Once a system is picked, it is implemented following the E-procurement life cycle discussed in the previous section. The rules for procurement for different users and items are developed. Any workflow that is necessary is identified, and the system is configured with the business rules of the organization. Customization requirements are identified, prioritized, and the system customized for your needs and tested thoroughly. The users are trained on the system and the system is deployed. The analytic tools available with the system are used to measure the benefits of using the system versus the traditional way of procurement. A comparison of the actual benefits versus estimated benefits should be made and surveys of various stakeholders done to ensure that the system is a success. TECHNICAL ISSUES Experience by companies implementing E-procurement suggests that it is not without problems. Current E-procurement products have less functionality than traditional purchasing products or purchasing modules of ERP systems. Some users are turned off by this and are hesitant to use the Eprocurement products. This is because E-procurement products are still in the early stage of their evolution but the functionality will continue to improve. Current E-procurement products mainly support noncoded/nonstock MRO materials only. Although stock MRO material purchasing is possible through extensive back-end integration, most companies are sticking to noncoded/nonstock items such as supplies (paper, pencil, furniture, etc.). The main issues are integration back to the ERP, the inventory management system, and the lack of access to information stored in item master files or database systems for coded items. For companies that have visual inventory management systems such as KANBAN, current E-procurement products might be a better fit. Direct material procurement requires a lot more back-end integration than indirect materials and is very much more complex. There are some direct material products available, but companies are hesitant to try them, as the functionality does not cover all the requirements. In addition, the standards are continuing to evolve. Unlike indirect, direct material process has linkages to planning, engineering, and sales systems, making it problematic. Catalog content development remains a major problem. Several suppliers think they need to invest a lot of time and money to convert the information from their internal format to meet different customer and marketplace catalog formatting requirements. Suppliers do not believe they reap the benefits for their investments and view the benefits as being primarily 649

LEVERAGING E-BUSINESS OPPORTUNITIES on the customer or buy side for which they have to bear the costs. Hence, suppliers are reluctant to satisfy all except their biggest customers. CONCLUSION Companies both big and small can now reap the benefits of E-procurement technologies by automating the purchasing operations such as catalog search, supplier selection, and purchase order processing. These activities can be done by end users, while ensuring that corporate purchasing policies are being enforced. Internet-based procurement technologies are fundamentally changing the way purchasing buys both its MRO goods and direct goods. Automated exchanging of data between suppliers and buyers is accomplished with these technologies, resulting in tighter relationships between suppliers and buyers. Fewer errors and higher data quality are filled into back-end ERP systems as procurement technologies get integrated with ERP and other back-end systems. References 1. Carter, P.L., Carter J.R., Monczka, R.M., Slaight, T.H., and Swan, A.J. "The Future of Purchasing and Supply: A Ten-Year Forecast," Journal of Supply Chain Management, Winter 2000, pp. 14-26. 2. Brack, K. "E-Procurement: The Next Frontier," Industrial Distribution, January 2000, 89(1), pp. 65–67. 3. Croom, S. R. "The Impact of Web-Based Procurement on the Management of Operating Resources Supply," The Journal of Supply Chain Management, Winter 2000, pp. 4–13. 4. Porter, A.M., "A Purchasing Manager’s Guide to the E-Procurement Galaxy," Purchasing; September 21, 2000, pp. S72–S88; available: http://www.manufacturing.net/magazine/purchasing/archives/2000/pur0921.00/092guide.htm 5. Avery, S. "Online Buy Systems Help Clean up Business Processes," Purchasing, October 21, 1999, pp. S38–40. 6. Baron, J.P., Shaw, M.J., and Bailey, A.D. "Web-Based E-Catalog Systems in B2B Procurement," Communications of the ACM, 43(5), pp. 93–100. 7. Feldman, S. “Electronic Marketplaces,” IEEE Internet, July–August 2000, pp. 93–95. 8. EPIC Buying and Paying Task Force: Commerce One Catalog Interoperability Pilot Phase2 Report, May 2000.


Chapter 52

Evaluating the Options for Business-to-Business E-Commerce C. Ranganathan

It would not be an exaggeration to state that the Internet represents the most significant change in the corporate world since the invention of the telephone. Over the past few years, the Internet has been a market maker, a market destroyer, an industry change-agent, and even an inverter of traditional ways of conducting business. The Internet and Web technologies have presented established firms with both opportunities as well as threats. The use of Web technologies in inter-organizational business transactions and in inter-firm relationships has caught the attention of executives and industry experts. This phenomenon is popularly known as business-to-business (B2B) E-commerce. The business press has disseminated varying statistics on the potential growth of B2B E-commerce. According to the U.S Census Bureau, B2B transactions in the manufacturing sector alone were worth over $777 billion in year 2000. In 2001, the year 2004 projections by different industry sources had a range from $963 billion to between $3 and 6 trillion (ActivMedia Research, IDC, Goldman Sachs & Co., Forrester).1 While the estimates predate the B2B shakedown, the general consensus is that B2B Ecommerce has a significant growth potential in the near future. Several buzzwords such as E-hubs, Internet exchanges, E-markets, Eprocurement, and E-exchanges have been coined by industry to refer to different models of B2B E-commerce. Pundits and experts have been writing numerous articles and books on what potential benefits can be derived 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


Exhibit 1. Business-to-Business Relationships (figure: marketplaces enable many-to-many relationships; private exchanges enable one-to-many relationships)

from B2B E-commerce. Buzzwords and predictions aside, the simple questions that are weighing on the minds of senior executives are the following: • How can an organization successfully exploit Internet technologies for improving inter-organizational relationships and business transactions? • What are the alternatives and reasons for pursuing the different choices? • What challenges need to be considered while exercising these options? These are the issues addressed in this chapter. INTERNET AND B2B RELATIONSHIPS To understand the B2B landscape, one needs to focus on two fundamental ways through which business relationships could be fostered using the Internet: many-to-many and one-to-many relationships (Exhibit 1). The Web provides a forum for many firms to come together in a common electronic platform to transact business, thus enabling many-to-many relationships. Further, the Web can enable a single firm to have one-to-many linkages with customers, suppliers, or both. Many-to-many relationships are facilitated by B2B E-marketplaces that bring together sizeable numbers of buyers and suppliers. B2B E-marketplaces (E-markets or E-exchanges) are electronic hubs where companies can trade goods and services and exchange information. B2B began as oneto-one EDI linkages and has since migrated to a variety of many-to-many models that enable myriad new capabilities. Another approach for B2B commerce is in the form of private exchanges that enable one-to-many relationships. E-MARKETPLACES As the Internet became a common business medium in the mid-1990s, several firms started developing Web-based E-marketplaces. An E-marketplace is a Web site where goods and services can be bought from a wide range of 652

Evaluating the Options for Business-to-Business E-Commerce suppliers. Most of these E-marketplaces initially started as vertical portals (vortals), focusing on services to a specific industry. The main idea behind the vortals was to bring together suppliers and buyers of a specific industry on a common electronic platform and to provide a variety of information as well as business transaction services to these firms. The important information services they provide include industry news, reports, trends, analyses, and in-depth reports of the companies in the industry. In addition to the information services, these E-marketplaces also enable companies to contact each other, exchange goods and services, and transact business. Another type of E-marketplace, called the horizontal marketplace, revolves around specific products or a group of products. These exchanges form around a supply market that cuts across multiple industries. The marketplaces for buying and selling MRO goods (materials, repair, and operations) such as hand tools, safety supplies, etc. that are used in production but are not a part of the final product (indirect materials), are examples of horizontal E-marketplace. The market for such product groups is fragmented, so horizontal marketplaces provide a value-adding forum for buyers and sellers to match respective needs and conduct business. Yet another type of E-marketplace is the function-oriented E-markets that concentrate on a particular business function and its business processes. For example, empolyease.com helps HR departments manage employee administration and benefits. Similarly, Tradeout.com brings together firms that want to sell their excess inventory to buyers. Such marketplaces help in locating suppliers and buyers of specific functions, and also considerably reduce the overhead costs and efforts involved in performing these functions. As E-marketplaces evolved, several variations of the many-to-many and one-to-many models emerged.2 A more robust classification of the E-markets is based on the ownership of the exchanges and the governance structure underlying them. Based on these, E-marketplaces could also be classified into three categories: independent, consortia, and private exchanges. The characteristics and the operational dynamics of these three types are significantly different as to warrant special attention. The following sections discuss and analyze each of these categories. Independent Exchanges Most of the E-marketplaces that were formed in the heydays of E-commerce were pure Internet start-ups developed by independent players. These firms hoped to cash in on the high dot.com valuations in the late 1990s. Examples of independent E-marketplaces include Chemdex, e-steel, etc. While a few independent E-marketplaces such as AutoTradeCenter.com went public, a 653

LEVERAGING E-BUSINESS OPPORTUNITIES vast majority of these independent firms remained private, dependent largely on venture capital funding to build their businesses. With increased competition, decreased venture capital funding, and hostile stock markets, most of the privately held, independent E-marketplaces were under intense financial pressures by 2001. Apart from financial troubles, there are others reasons why many independent exchanges failed. From a fundamental economics perspective, most of these exchanges operated in a domain with low entry barriers. As a result, many firms entered the fray and the competition intensified. For example, in the chemical industry there were more than 25 E-marketplaces operating in the year 2000, including Chemdex, ChemicalDesk, ChemB2B.com, and eChemicals. Also, the management teams of these firms often had technology executives who lacked business skills and industry-specific knowledge. The lack of adequate financial resources and business expertise led to the downfall of several independent exchanges. Some of the independent exchanges that had built a sound technical infrastructure became service providers, repositioning themselves as software and service companies, assisting other firms in their B2B efforts.3 For an independent exchange to survive over the long term, it needs to have a form of hard-to-imitate and value-added services that give it an edge over the competition. An example of a value-added service provided by an exchange is BuildNet, which not only provides facilities for trading products, but also provides value-added services in the form of specialized solutions for materials planning, job-lot scheduling, etc. Independent exchanges that operate as functional or horizontal niches might find it easier to compete. Consortia-Based Exchanges Consortia-based exchanges are joint ventures involving different firms, with an overall goal of improving their performance and that of the industry as a whole. In this model, the various industry players, including competitors, combine forces to create a common forum for doing business-tobusiness transactions. One of the earliest of these was Covisint, created by DaimlerChrysler, Ford, and General Motors. Another popular consortia marketplace is Transora, formed by consumer-products companies such as Unilever, Proctor & Gamble, the Coca-Cola company, as well as the grocery manufacturers. Avendra is an E-marketplace established by Marriott, Hyatt, and three other major hotel chains. Exostar is an exchange in the aerospace sector formed by companies such as Boeing, Raytheon, Lockheed Martin, and BAE Systems. The fundamental idea of the consortia exchanges is to exploit the size, deep industry knowledge, and sophisticated business practices that the 654

Evaluating the Options for Business-to-Business E-Commerce founders bring to the exchange. As opposed to independent marketplaces, the consortia exchanges also have an inherent advantage in the form of high liquidity from their own founders. A major hurdle facing consortia exchanges is antitrust and other regulatory issues. By their very business model, a consortia exchange brings competitors together and, thus has the potential to diminish competition. Several consortia exchanges often require agreements that ask for sharing of product, price, quantity, and other supply-chain data. This has raised concerns about unfair and anti-competitive trade practices. Depending on the vested interests of the key founders, the consortia exchanges might treat other industry players unfairly or even engage in activities such as price-fixing, exclusion from participation in exchange, etc. Regulatory bodies in the United States and Europe are scrutinizing such complaints. Several consortia exchanges have not lived up to their initial expectations for other related reasons. The sheer founding members’ size and long-term business practices, let alone ownership interests, have resulted in some cumbersome and slow decision-making processes under joint governance structures. This is, of course, in conflict with the speed and business agility that have been associated with E-commerce developments emerging from Silicon Valley in the late 1990s. Another problem relates to building a sustainable supplier base. Many consortia exchanges failed to come up with attractive value propositions for suppliers. Some exchanges require the suppliers to conduct all sales through the exchange, and also require them to pay transaction fees. Although this provides cost and process advantages to the buyers, suppliers are forced to bite narrow margins. In some cases, suppliers have created their own upstream consortia exchanges in response. For a consortia exchange to survive and succeed, it needs to have strong commitment and collaboration among its members. To promote stronger commitment and collaboration, several exchanges have issued equity warrants to members when their transaction volumes meet set targets. Other tactics include confidence-building measures such as joint procurements, sharing of emergency inventories, joint coordination of logistics, etc. To overcome supplier resistance and develop a broader supplier base, several consortia exchanges have included some primary suppliers as founding members. In these cases, suppliers, distributors, and other business partners come together to found an exchange, thus paving the way for closer collaboration. In other cases, suppliers play roles as “development partners,” in which they have an active role in determining the functionalities of an exchange and also the product offerings. For example, Covisint included some of its key suppliers as development partners in the exchange.4 655

LEVERAGING E-BUSINESS OPPORTUNITIES Private Exchanges Private exchanges are single-firm operated, Web-based hubs that connect a firm to its business customers, suppliers, or both. Unlike independent exchanges that are managed by third parties, and consortia exchanges that are owned by a group of firms, private exchanges place complete control of the B2B exchange in the hands of the company running it. An important feature of the private exchange is that it is an invitation-only network. The company establishing a private exchange can choose the partners it wants to participate in the exchange. Therefore, these exchanges are not marketplaces in the real sense. They do not help locate new customers or suppliers; however, they provide a cost-effective way to improve and enhance the linkages and processes established with suppliers and customers. The ancestor to private exchanges is EDI (electronic data interchange) — the application that many companies have traditionally used to exchange and share documents and other information via a telecommunications network. Private exchanges offer all the advantages of traditional EDI, with increased capabilities based on the Internet and Web technologies. Companies such as Motorola, Dell, Cisco, and Wal-Mart have established private exchanges to establish closer relationships with their business partners and achieve considerable process efficiencies and cost savings. Private exchanges can be further classified into buyer based or seller based. Buyer-based private exchanges connect a buyer firm to its suppliers, thereby providing effective and efficient supply-chain operations. They typically facilitate common tasks such as online ordering, invoicing, and shipment delivery confirmation. However, they could also be used for forging stronger collaborations with suppliers in the form of collaborative planning and replenishment, forecasting, joint product design, etc. Through its private exchange, Wal-Mart allows its suppliers access to the history of customer transaction data, and suppliers use this data to analyze the sales trends, plan their production, and manage their inventories accordingly. Participation in such buyer-launched exchanges also enables suppliers to quickly respond to customer demands, manage their processes more efficiently, and gain privileged access to the buyers’ systems. In seller-based exchanges, a firm establishes B2B linkages with its key customers using a Web site. These exchanges provide facilities for managing customer orders, product specifications, customer support, and such activities. For example, Cisco’s private exchange allows customers to configure, place, and check their orders on the exchange. In some privateseller exchanges, the seller can even examine a customer’s inventory and


Evaluating the Options for Business-to-Business E-Commerce replenish it automatically. These exchanges could also be designed to facilitate joint product design, forecasting, and collaborative planning. In 2001, Motorola implemented a B2B private exchange through which its dealers and customers could log on and obtain product-related information and also fully manage their accounts. Traditionally, the company relied on a network of dealers to manage both the sales and service of their product lines. However, the company also had other channels and a direct sales force through which certain groups of customers were serviced. To manage multiple types of customers, configurable products, and a vast array of product lines, the company relied on a system of call centers and bulky print catalogs and product literature. This was complex and cumbersome. With the implementation of a private B2B exchange, Motorola now provides a common platform for its large corporate customers and dealers to transact business with the firm. Apart from obtaining product information, service support, as well as managing the orders, customers also have the authority to control who can place orders, who can get access to their internal account information, etc. This has provided a lot of flexibility to the customers, cut down the order processing times, reduced the delivery times, and improved customer satisfaction. A major advantage of a private exchange lies in its ability to support a firm’s unique strategy and organizational needs. Unlike the independent and the consortia models, where a company would have less freedom to align its B2B activities with specific organizational requirements, a private exchange offers the flexibility of tailoring an exchange to meet firm-specific requirements and strategic goals. Private exchanges are more appropriate for companies that enjoy a dominant position in their industries and possess superior supply-chain management capabilities. Such companies may not opt for a third-party managed independent or consortia exchange in order not to share their knowledge and expertise. For example, Dell enjoys a formidable position in the computer industry due to its build-to-order supply-chain capabilities, and has opted to have its own private exchange; it prefers to keep its proprietary supply-chain practices secret. ASSESSING THE OPTIONS FOR B2B E-COMMERCE A company examining B2B E-commerce has a number of options. As previously discussed, it can launch its own exchange or join an E-marketplace established by independent entities, or perhaps join an industry consortium as a founder or as a participant. A mapping of the different approaches and B2B exchange options available to a firm is shown in Exhibit 2. 657

Exhibit 2. Mapping of B2B Exchange Options (figure: B2B linkages divide into one-to-many private exchanges, comprising buyer-based and seller-based exchanges, and many-to-many exchanges, comprising independent and consortia exchanges organized as horizontal or vertical exchanges)

The overall objective of engaging in B2B E-commerce is to improve profitability via better inter-firm relationships, supply-chain planning and collaboration, product pricing, logistics and distribution management, and procurement efficiencies. For a company to realize its goals from B2B Ecommerce, a single model or a single B2B application may not be sufficient. Instead, the company needs to have a portfolio of E-business applications that are aligned with its overall business strategy. For example, one of the earliest B2B E-commerce initiatives by Dow Chemical was to set up a private exchange: MyAccount@Dow. This exchange was launched in Latin America in 1999 after pilot testing with over 200 customers. In 2002, it grew to over 8000 users in 35 countries, capturing 40 percent of the total sales volume in Latin American countries. In addition to this private exchange, Dow Chemical also involved itself in close to ten other B2B initiatives. Dow Chemical has also participated in consortia exchanges, including Omnexus to sell plastics and Elemica to sell other chemical products, and has equity stakes in ChemConnect, an independent exchange for locating new suppliers and for auctioning some direct materials. Thus, the company has a portfolio of B2B projects, each aimed at bringing distinct capabilities and advantages to the organization. When considering appropriate options for B2B initiatives, an executive is often faced with decisions about whether or not to join an exchange, to launch the company’s own exchange, or to just stay on the sidelines to wait and watch. The key decision may not be about participating in exchanges per se, but rather about which products or business units should participate in which markets and at what level. The nature of the company’s products, the raw materials that go into producing the products, and the complexities in the firm’s supply chain should primarily drive the B2B decisions. 658

Evaluating the Options for Business-to-Business E-Commerce Given the success of Dell, Cisco, and Wal-Mart’s private exchanges, launching a private exchange might sound like a superior solution, but it also has its own problems.5 The costs of setting up a private exchange can be much higher than the costs involved in forming an industry consortia or participating in other independent exchanges. Apart from the up-front capital costs of setting up an exchange, the annual operating costs are also likely quite high. And a private exchange is bound to fail if the firm does not exert significant influence in its supply chain to bring its suppliers or business partners to the exchange. Therefore, the power that the firm wields in its supply chain and the level of investment required should be important considerations. If a firm has a longer product cycle, and if the number of suppliers or customers is rather small, it may not be beneficial to launch a private exchange. Other investment alternatives could deliver similar or better results. There are other important issues that need to considered by firms assessing their B2B options. The extent of maturity in the industry is one such issue. For example, one of the key concerns in B2B E-markets relates to the standardization of codes used for product-related data. Less-mature industries may not have standards for products, processes, and for facilitating information-exchange. In fact, the standards for products and productrelated data in many developed, mature industries may also be fragmented. Because B2B systems require that multiple firms follow standard codes for describing products, it is important to integrate disparate data formats and codes. Only such an exercise will pave the way for the seamless integration of Web-based systems across supplier and buyer organizations. An important technical concern in B2B E-commerce is the interoperability across the participants’ internal systems, new B2B systems, and the applications used by the other business partners. It is important to integrate the B2B systems with the current IT architectures of multiple organizations involved in the B2B initiative. This is a monumental systems integration exercise, as it involves developing integrated solutions for proprietary mainframe-based applications, multiple platforms, and databases across multiple companies. A summary of some of the most important considerations in assessing B2B electronic commerce options is presented in Exhibit 3. This can serve as a preliminary checklist for firms to assess their own positions and their supply-chain environment, and to determine appropriate courses of action. CONCLUSION B2B E-commerce offers cost-effective ways to manage inter-firm relationships and conduct business transactions. In an era of extended enterprises 659

Exhibit 3. A Checklist for Assessing B2B E-Commerce Options

Key Decisions:
• What are your company’s goals for the B2B E-commerce marketplace?
• What is the volume and size of your buyer–supplier transactions?
• How unique and complex are the buyer–supplier interaction processes in the firm?
• What popular exchanges exist in your industry? Do consortia options exist?
• Are your key buyers and suppliers already participating in a marketplace? Would participation in that marketplace ease your existing transactions with other companies?
• To what extent does your company want to use B2B marketplaces for sourcing? Is a full E-sourcing solution desired, or a basic turnkey solution?
• Would it be viable to launch and maintain a private exchange?
Infrastructure and Viability:
• What kind of mechanisms are used for selling, be it catalog, reverse auction, forward auction, etc.?
• Is the marketplace based on proprietary technologies or a major infrastructure vendor?
• What value-added services are provided (e.g., payment, logistics, etc.)?
• How does the marketplace make money? Will this business model afford them enough of a profit to be viable into the future?
• What training, customer service, and support features exist?
• Is the marketplace backed by financially solid founders or consortium members?
Buyers and Sellers:
• Is this an independent (open) exchange or a consortia-driven marketplace?
• Are buyers and sellers screened for liquidity?
• Is the marketplace buyer-focused, seller-focused, or neutral?
• Is there a critical mass of buyers and sellers?
• Are these businesses with which you have been involved before?
Internal Position:
• Which products or services are best suited for B2B E-commerce (direct materials, indirect materials, finished goods)?
• Does the firm have adequate financial resources to launch or participate in an existing exchange?
• How much power does the firm have vis-a-vis other partners in its supply chain?
• Are the internal IT systems in the firm and the Web systems interoperable?
• Is there adequate IT expertise in the firm to carry out the B2B initiatives?

where the business success of a firm largely depends on its suppliers, customers, and other business partners, it is important to recognize the potential of B2B E-commerce and take appropriate action. Remaining on the sidelines, adopting a wait-and-watch approach, could prove to be a costly mistake. Based on the internal organizational context, the industry conditions, and the complexities involved in their supply chains, firms need to make strategic investments in B2B E-commerce solutions.

Notes
1. For different projections on B2B growth, see: (a) Direct Marketing, “B2B E-Commerce Revenues to Reach 2.7 Trillion by 2004,” March 2001; (b) Direct Marketing, “Worldwide B2B Internet Commerce to Reach $8.5 Trillion by 2005,” June 2001; (c) Hellweg, E., “B2B Is Back from the Dead,” Business 2.0, April 5, 2002; (d) “Growth Spurt Forecast for B2B,” Financial Executive, May 2001.
2. For a discussion on emergent B2B models, see Kerrigan, R., Roegner, E.V., Swinford, D.D., and Zawada, C.C., “B2Basics,” McKinsey Quarterly, 2001, No. 1, pp. 45–53. For an interesting discussion on categories and typologies of E-marketplaces, see Kaplan, S. and Sawhney, M., “E-Hubs: The New B2B Marketplaces,” Harvard Business Review, May–June 2000, pp. 97–103.
3. See Sawhney, M., “Putting the Horse First,” CIO Magazine, May 15, 2002 (available at www.cio.com).
4. For an interesting discussion on consortia exchanges, see Devine, D.A., Dugan, C.B., Semaca, N.D., and Speicher, K.J., “Building Enduring Consortia,” McKinsey Quarterly, 2001, No. 2, pp. 26–33.
5. For a good discussion on setting up private exchanges, see Hoffman, W., Keedy, J., and Roberts, K., “The Unexpected Return of B2B,” McKinsey Quarterly, 2002, No. 3, pp. 96–105.



Chapter 53

The Role of Corporate Intranets Diana Jovin

Intranets play a key role in reducing costs and increasing the effectiveness and efficiency of internal information management. Intranet applications serve as productivity, sales, service, and training tools that can be disseminated throughout the organization at much lower cost than traditional paper, client/server, or mainframe implementations. In addition, intranets enhance the capabilities of traditional applications by extending portions of the application to a wider audience within the organization. THE INTRANET IMPACT: WHAT CAN AN INTRANET DO? Applications made available on an intranet tend to fall into one of two categories — Web self-service and intranet reengineering. Web self-service applications make the process of information delivery more efficient by eliminating cost and redundancy from the information delivery cycle. Intranet reengineering applications, through the use of real-time information delivery, change existing business processes. These applications enable companies to offer new products and services and increase the effectiveness of the business decision-making cycle. Web Self-Service Web self-service applications allow users to access information more efficiently by eliminating an intermediary process or middle-man, whose sole function is facilitation of information access. These applications make information more readily available, accurate, and reliable. Examples include • Employee directories. Directories provide basic personnel information, including phone numbers, extension, addresses, and job descriptions, that allow employees to update information such as address



LEVERAGING E-BUSINESS OPPORTUNITIES changes themselves, without going through the process of filling out an information change request form. • Human resources benefits. Human resources applications allow employees to review their status on vacation and medical benefits, look up current status of 401(k) contributions, and change allocation of contributions to 401(k) funds. • Technical support. Technical support applications enable employees and business partners to look up answers to technical issues directly from a technical support database and extend service capabilities beyond working hours. Intranet Reengineering Intranet reengineering applications not only provide real-time information delivery, but they also impact existing business processes and how decision making feeds into them. The following sub-sections describe sample applications. Sales Force Automation. Web-based sales-force systems provide the sales staff with immediate access to customer account status and activity. Whether in retail banking, brokerage, or other industries, viewing real-time status is a tool that the sales force can use to provide new products and better service. In some financial institutions, portfolio applications that make customer information immediately accessible to the sales force are replacing the practice of distributing customer account information in the form of monthly, paper-based reports. Manufacturing and Inventory. Inventory systems that interface between manufacturer and distributors can significantly improve processes such as inventory location and price protection. An example application that manufacturers are providing to distributors is inventory tracking, which provides information on availability, price, and location. In industries with price volatility, Web applications allow manufacturers to respond more quickly to price protection issues by enabling distributors to enter sales and order information that is processed immediately rather than in batch mode. Purchasing and Financial. Purchasing applications let employees submit purchasing requests directly from the Web. International companies can benefit from applications that provide the purchasing department with information on foreign exchange exposure and recommended cash position prior to purchase.
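As a minimal sketch of the Web self-service pattern, the following assumes a hypothetical employee-directory application in which employees view their own record and update their address directly; the routes, field names, and in-memory store are illustrative rather than a description of any system discussed here.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# In-memory stand-in for the HR master data behind the intranet application.
DIRECTORY = {
    "e1001": {"name": "A. Employee", "extension": "4321", "address": "100 Main St."},
}

@app.route("/directory/<employee_id>", methods=["GET"])
def view_entry(employee_id):
    # Web self-service: employees look up directory data without an intermediary.
    return jsonify(DIRECTORY.get(employee_id, {}))

@app.route("/directory/<employee_id>/address", methods=["POST"])
def update_address(employee_id):
    # Employees change their own address instead of filing a change-request form.
    DIRECTORY[employee_id]["address"] = request.form["address"]
    return jsonify(DIRECTORY[employee_id])

if __name__ == "__main__":
    app.run()  # in a real deployment, served from the intranet Web server
```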

In these examples, real-time delivery of information can have a significant impact on a company's product and service offerings or its ability to respond more quickly to the customer. In some industries, the Web is rede664

The Role of Corporate Intranets fining the competitive landscape. For example, banks, which have been losing share in back-office activities to software vendors such as Intuit, are using the Web to reclaim their market share with applications that allow customers to enter request-for-quote or payment initiation directly over the Web with an easy-to-use interface. The Web as an Application Platform The Web is compelling as an application platform because it provides both strategic and tactical benefits. Companies can harness the Web as a way to attract new customers and deliver new products and services. At the same time, companies can significantly reduce the costs of technology and doing business. These benefits combined make the Web an attractive platform over alternative implementations such as client/server or mainframe. Benefits include: • Global availability. Web applications can be made available on a global basis, providing companies with a mechanism to go after a new set of customers or to integrate remote offices or business partners without building expensive, proprietary networks. • Instant application distribution. Applications can be deployed instantaneously worldwide, eliminating the need for installation of client-side software or for printing, reproduction, and distribution of paperbased information. • Platform and version independence. Applications are server based and can interact with any Web browser on any Internet-capable client. Applications are no longer tied to the client hardware platform and can easily be distributed across heterogeneous computing environments. Applications can be updated instantaneously, eliminating the hassle of version maintenance and support. • Reduced training costs. Web applications have a common look and feel, which lowers training costs of applications traditionally presented in different types of GUI environments. • Increased data reliability. Web applications can eliminate redundant data entry from paper forms. Reliability and availability of data is increased when the information holder can enter and update information directly. With benefits that contribute to both increased revenue and decreased cost, the potential impact on a company's bottom line can be huge. NEW MODEL FOR DISTRIBUTED COMPUTING The Web's benefits derive from its architecture. A Web application is not merely a client/server application with a Web browser interface. “Web-


LEVERAGING E-BUSINESS OPPORTUNITIES native” applications take full advantage of this architecture. “Web-enabled” applications typically miss the full set of benefits because they are tied to an existing client/server-based architecture. Four key areas in which the Web architectural model differs significantly from that of client/server include: network infrastructure, client-side requirements, server-side requirements, and management of database log-in. WAN versus LAN Web applications are deployed over a wide area network (WAN), in contrast to client/server applications, which are deployed over proprietary local area networks (LANs). There are two immediate implications in this difference: reach and cost. In the WAN environment, companies can communicate with anyone connected to the WAN, including customers and business partners worldwide. LANs typically have a smaller reach and are also often expensive to install and maintain. WAN applications provide a means for a company to communicate with business partners or employees worldwide without building a global private network as long as security considerations are sufficiently taken into account. Application Publishing: Server versus Client Web applications, in contrast to client/server applications, are primarily server-based, with a “thin client” front end. This thin client may do some business logic processing, such as data validation, but the bulk of the business logic is processed on the server side rather than on the client. Client/server applications, in contrast, typically support “fat clients,” in which the application is a sizeable executable running on the client. Although this model takes advantage of client CPU power for application processing, the client/server model does not provide the Web's primary benefit — instant application distribution. Web tools that provide clientside plug-ins typically call themselves “Web-enabled” as opposed to “Webnative” because they are not taking full advantage of the Web's architecture in instant distribution. N-Tier versus Two- or Three-Tier Web applications require a multi-tier, or n-tier, server architecture. Scalability takes a quantum leap on the Web, with a much larger application audience and greater uncertainty in the number of users who might choose to access the application at any given time. Client/server applications hit the wall with a two-tier architecture. To solve this problem many client/server implementations have moved to a 3666

The Role of Corporate Intranets tier architecture. Given the greater number of users who can access a Web application, even three-tier models are not enough to sustain some of the heavy-duty applications being deployed today. In addition, the Web provides the capability to move intranet applications, such as customer portfolio management, directly to the customer over the Internet. These applications can only migrate to the Internet environment if they have been designed to scale. Shared Database Connection versus Individual Log-in Web applications incur heavy CPU processing requirements as a result of the number of users accessing the application. As a result, well-designed systems provide users with persistent shared database connections. In this model, the user only ties up a database connection when he or she has pressed an action button, hyperlink, or image that requests data from the database. Once the data is returned, the database connection is free for another user, without requiring the database connection to shut down and reopen for the new user. In the client/server model, the user maintains an individual persistent database connection from the time he or she logs on to the time the application is exited. In this model, the database connection is inefficient because the user is logged on to the database regardless of whether a database action is taking place or the user is merely looking through the results that have been returned. TECHNICAL CONSIDERATIONS Although a Web architecture delivers significant benefits, it also introduces new technical challenges, particularly with respect to scalability, state and session, and security. When developing applications and selecting development tools, it is critical to understand these challenges and how they are being solved by different vendors in the industry today. Scalability and Performance Web-native applications (i.e., applications that are server-based rather than client-side browser plug-ins) that provide the highest degree of scalability are deployed through an n-tier application server. Application servers have become the standard model for delivering scalable applications. In the early stages of Web development, applications were executed through CGI. In this model, the Web server receives a request for data, opens a CGI process, executes the application code, opens the database, receives the data, closes the database, then closes the CGI process, and


returns the dynamic page. This sequence takes place for each user request and ties up CPU time in system housekeeping because of the starting and stopping of processes. The system housekeeping involved in executing the application increases in proportion to the size of the application executable.

Application servers, in contrast, stay resident as an interface between the Web server and database server. In this model, the Web server passes the request to the application server through a very small CGI relay or the Web server APIs. The application server manages the application processing and maintains persistent connections to the database. Enterprise-level application servers multiplex users across persistent database connections, can be distributed across multiple CPUs, and provide automatic load balancing and monitoring.

State and Session Management

The Web is a stateless environment, meaning that information about the user and the user's actions is not automatically maintained as the user moves from page to page in the application. This presents obstacles to providing LAN-like interaction in the Web environment. Some technology vendors have solved this problem by building session and state managers into the application server, which allows developers to build applications with persistent memory across pages. An earlier approach that still persists is to write "cookies," or files containing state information, to the client browser. These files are read on each page access. This is a manual process that is less secure than server-based session and state memory.
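To make the contrast concrete, here is a minimal sketch (not from the original text) using the Java Servlet API: server-managed session state keeps the data on the server and sends only a session identifier to the browser, while the hand-written cookie pushes the state itself to the client. The class and attribute names are illustrative.

```java
// Minimal sketch: server-side session state vs. a hand-written cookie.
// Assumes the Java Servlet API; names are illustrative only.
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class PortfolioPageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Server-managed state: the container keeps the data between pages,
        // and only a session identifier travels to the browser.
        HttpSession session = req.getSession(true);
        Integer visits = (Integer) session.getAttribute("pageVisits");
        visits = (visits == null) ? 1 : visits + 1;
        session.setAttribute("pageVisits", visits);

        // Hand-rolled alternative: push the state itself into a cookie.
        // The value lives on the client, so it is less secure and must be
        // re-read and re-validated on every page access.
        resp.addCookie(new Cookie("lastPage", "portfolio"));

        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>Pages viewed this session: " + visits
                + "</body></html>");
    }
}
```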

Security

Security is key to implementing business-critical applications in the Web environment. The good news is that it is becoming easier to manage security on the Web. In building a secure environment, it is important to understand, first, the intranet or intranet application's security requirements and, second, the technology components of the intranet solution that are going to provide them. Exhibit 1 shows some of the components that might exist in an intranet environment and how they might contribute to different aspects of a secure solution.

Exhibit 1. Some Components of an Intranet Environment and Their Contributions to Security

Technology Component      Contribution to Security
Web server                User authorization and data encryption
Application server        Page navigation flow control
Database server           Database log-in
Firewall                  Internal network access control
DCE infrastructure        Centralized security log-in and rules
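As a rough illustration of the "user authorization" contribution listed in Exhibit 1, the following sketch uses the Java Servlet Filter API to gate every request on a session attribute. It is an assumption-laden example, not part of the original chapter: the attribute name, redirect target, and class name are invented for illustration.

```java
// Sketch of request-level user authorization at the Web/application tier.
// Assumes the Java Servlet API; "authenticatedUser" and "/login.html" are
// illustrative names, not from the original text.
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthorizationFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;

        // Let the request through only if an authenticated user is bound to
        // the current session; otherwise send the browser to a log-in page.
        Object user = (req.getSession(false) == null)
                ? null : req.getSession(false).getAttribute("authenticatedUser");
        if (user == null) {
            resp.sendRedirect("/login.html");
            return;
        }
        chain.doFilter(request, response);
    }

    public void destroy() {}
}
```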

CONCLUSION

In addition to sound tools and technology, a successful intranet also requires a solid operational plan. These plans differ significantly from company to company, but issues that will need to be considered and addressed include:

• Should the organization build in-house expertise or outsource intranet development?
• Should purchases of tools and technology be centralized through one technology evaluation group, or dispersed throughout the company and individual business units?
• How should the company address training and education for the intranet?
• How can the company generate excitement and buy-in?

One common theme across companies, however, is to start with some simple but effective applications, such as employee directories. Successful operations plans use these applications to gain interest and excitement, and intranet champions within the organization take it from there.

The impact of intranets on corporate profits and productivity can be tremendous. The move to an intranet architecture requires rethinking some of the traditional assumptions of client/server architecture, but the benefits that can be reaped from the Web are enormous. Intranets are redefining the landscape of corporate America and can be a key to achieving or keeping competitive advantage.



Chapter 54

Integrating Web-Based Data into a Data Warehouse
Zhenyu Huang, Lei-da Chen, and Mark N. Frolick

Data warehousing technologies have become mature enough to efficiently store and process huge data sets, which has shifted the data warehousing challenge from increasing data processing capacity to enriching data resources in order to provide better decision-making assistance. There have been reports that some organizations intend to recruit Web data into data warehouse systems as a means of responding to the challenge of enriching data resources, because infinite information has made the Internet the largest external database to each organization. However, there is not a systematic guideline to support such an intention. To fill this void, we introduce Web integration as a strategy to merge data warehouses and the Web, with an emphasis on effectively and efficiently acquiring Web data into data warehouses. We also point out that the critical step for Web integration is to acquire genuinely valuable business data from the Web. A framework for determining the business value of Web data is offered to facilitate Web integration efforts. DATA WAREHOUSING The term “data warehouse” was first coined in the early 1990s by Bill Inmon, who defined a data warehouse as “a subject-oriented, integrated, time-variant, nonvolatile collection of data organized to support management needs” (Castelluccio, 1996). Since then, information systems (IS) practitioners and researchers have begun to pay closer attention to this new form of information system technology. Data warehousing technology




Exhibit 1. Data Warehousing Environment (figure: internal and external data sources feed the data warehouse, which in turn supplies data marts and user tools)

has evolved so rapidly that it is now one of the hottest topics in the IS world. Data warehousing assists organizational information processing by providing a solid platform of integrated data, both current and historical, from which organizations can conduct a series of business analyses. Many companies have achieved substantial benefits from their large amounts of capital and human resource investment in data warehouse systems. For example, an early study of 62 data warehousing projects showed an average return on investment (ROI) of 321 percent, with an average payback period of 2.73 years. The same study also predicted that users would spend nearly $200 billion on data warehouse technology over the subsequent five years. Data warehousing is a complex effort, which collects all related data from initially isolated (legacy) systems, cleans the data by correcting errors, and organizes the data according to an appropriate scheme (e.g., the star schema). Predominantly, a data warehouse stores internal data. According to Inmon (1996), one of the salient characteristics of data warehousing is that “in almost every case, the data entering the systems comes from the operational environment, which is still true for most current situations.” With the exponential growth of the Internet in general and the World Wide Web (WWW) in particular, however, this tradition has been contested, because the Web is shifting the emphasis of data warehousing efforts from internal data to external data. The challenge is where and how to collect external data from the Web, and how to handle it effectively and efficiently. Typically, data used by data warehousing systems is extracted from various files and databases of initially separated information systems such as a procurement system, a production management system, a payroll system, etc. There are two types of data resources for data warehouses: operational systems and an external environment. The operational systems are 672


Exhibit 2. Web-Based Data Warehousing (figure)

the primary information source, while the external data, composed of news, statistical data, and government archives, is optional. Exhibit 1 briefly depicts the working mechanisms of a data warehouse system. As the amount of information available on the Web has expanded explosively in recent years, organizations can no longer ignore the Web as a substantial source of valuable information. Data warehousing and the Internet are two key technologies that can offer potential solutions for managing corporate data more effectively (Chen and Frolick, 2000). The Web and data warehousing together could form a powerful and highly successful partnership (Gardner, 1997; Hackathorn, 1997; Chen and Frolick, 2000; Kimball and Merz, 2000). Generating dynamic pages from Web-enabled databases and using client-side technologies to manipulate data locally to the browser create opportunities for organizations to enhance information visibility. For example, publishing warehouse data via the intranet, extranet, and Internet has become a highly productive approach that combines Web delivery mechanisms with the decision support capability of data warehousing (see Exhibit 2). To date, the emphasis has predominantly been on one-way data flows from the data warehouse to the Web (Corbin, 1997; Wilson, 1997). In this way, the data warehouse provides information and the Internet offers a presentation platform. The Internet is the selected distribution channel for data warehouse information due to its widely accepted computer standards, thereby providing accessibility to users. Chen and Frolick (2000) have identified several compelling advantages of “Web-based data warehousing,” including ease of access, platform independence, lower operating costs, and so on. Web-based data warehousing makes the corporate data warehouse and its application easily accessible from any computer with Internet connectivity. The Web-based data warehouse provides a Web-


LEVERAGING E-BUSINESS OPPORTUNITIES centered approach to extensibility and ease of access to information. Because the Web is built to certain computer standards, including TCP/IP for communications, HTTP for application navigation, and HTML for display capabilities, users adopting any computing platform can access vital business information. Moreover, Web-based data warehousing reduces establishment and management costs by offering a thin-client solution. Recently, however, another trend has emerged. Organizations are increasingly interested in how to integrate Web data into data warehousing systems, not simply presenting existing warehouse data to the Internet. There are several forces pushing this trend. First, external data is highly essential to organizations, especially in enhancing decision making effectiveness. Organizations are facing internal and external pressures for newer, better, and more timely information beyond the range and capability of internal data in assisting sound decision making. Managers are encouraged to think externally, even globally. Second, the Internet provides a viable and low-cost platform for organizations to collect worldwide information. Third, data warehousing is quite a mature technology that can accommodate large amounts of information extracted from the Internet. The benefits of putting highly business-related Web content into an organization’s information systems have been noticed by practitioners and researchers (Hackathorn, 1997, 1998). However, combining data warehouses and the Web is still new to people, and no systematic guidance for accomplishing this integration exists. Web integration is a systematic means of combining data warehouses and the Internet. Our purpose is to discuss the feasibility and significance of Web integration. The discussion begins with the analysis of the value of Web data to business and organizational performance. It is followed by discussion of the challenges and potential problems facing Web integration. After pointing out that the critical step of Web integration is to correctly and efficiently evaluate Web data, a framework of analyzing Web information is proposed. At the end, the implications of Web integration to researchers and practitioners and some recommendations are presented. WEB INTEGRATION We define Web integration (WI) as the systematic methodology of selecting, analyzing, transforming, and loading Web data into data warehouse systems (Exhibit 3). WI is a complex technology involving data warehousing, Internet, data mining (particularly multimedia data mining), and search engine technologies. After conducting WI, external data (principally from the WWW) of a data warehouse system should be equal to or even more than internal data in terms of both quality and quantity. WI is a powerful toolkit to manage and take advantage of Web data. The difficult part 674


Exhibit 3. Web Integration Architecture (figure: components include Web toolkits, a Web integration directory, Web servers reached over the Internet, intranet, or extranet, an application server, and the data warehouse)

of WI is how to surf the Web quickly, discover and acquire useful pieces of information that possess business value, and reorganize those pieces into a data warehouse allowing future retrieval. Web toolkits are used to automatically surf the Web and load information from selected Web sites into data servers. Web toolkits have search engine capabilities but they are not limited to this functionality. Implanted automatic mechanisms allow Web toolkits to search Web sites and pages over the Internet autonomously and deliberately, based on criteria and rules about the Web sites defined in the WI directory. A WI directory reflects the business interests and value orientation of an organization toward Web data. It entails a hierarchical structure for Web domains composition — the portion and proportion of information from such domains as .com, .edu, .org, and .net per organization needs. It also maintains a list of Web sites Web toolkits need to visit regularly for timely information. A WI directory must be updated by Web analysts routinely to accommodate changes of the business world. Web toolkits extract Web data from targeted Web sites into a Web server where data is scrubbed, transformed, and finally integrated into a data warehouse system. The Internet removes time and distance constraints for data collection. The Web acts as a massive database, although the format and structure of this database is not homogeneous and in fact is a mixture of a variety of formats. As for data warehousing, enlarged and valuable data input maximizes the performance of data warehouses. The principal task of WI is to extract valuable data from the Web. To a certain degree, the genuine value of Web data brought to organizations determines the value of Web integration. The next section discusses the value of WI, including the abundance of informa-


LEVERAGING E-BUSINESS OPPORTUNITIES tion on the Web, the value of Web data to business, cost reduction for data warehousing, available toolkits for implementing Web integration, current and timely information, ease of use, and so on. ADVANTAGES OF USING WEB DATA Organizations use the Internet to publish business related information for their various audiences because of its advantages as a publication medium over other regular media, such as reduced cost, manageability, speed, and wide coverage. Electronic commerce (EC) ties organizations with the Internet even tighter. Business-to-consumer (B2C) and business-to-business (B2B) EC activities here led to the publication of enterprise information on the Web such as organizational mission, products, shop locations, order tracking, delivery methods, service and maintenance, and so on. Information published by for-profit, nonprofit, and public organizations makes the Internet a potential information treasure. Internet data is rich in content and wide in time range. The newest data as well as historical data are available simultaneously on the Web. We can expect Web data to be new for several reasons. First, companies have to update their Web sites fast under competition pressures. Some Web sites, such as news, weather forecast, and financial reports, must be updated every few minutes or seconds to stay valuable. Second, there are fewer restrictions on Web publication than other media such as newspapers, magazines, or television. Web media such as registered URLs, servers, and applications are organizations’ own assets. Organizations have direct control on how and when to use these assets. Compared with other types of information resources, Internet data is more readily available to data warehouses. In the past, people had to go to libraries, research centers, and government departments to check materials in various media formats, including paper, microfiche, and video/audio archives to collect outer data. On the Internet, with just a few mouse clicks, people can surf around the world and find the information they want. No other media but the Internet can offer such quickly accessible and complete information in computer-discernible format. Therefore, it is simpler to load Web data than other media-based data into data warehouses, which reduces the cost of processing, transforming, and loading data in Web integration processes. BUSINESS INFORMATION ON THE WEB Drucker (Hackathorn, 1997) admonishes IT executives to look outside their enterprises for information. He remarks that the single biggest challenge is to organize external data because change occurs from the outside. He further predicts that an obsession with internal data can lead to organizations being blindsided by external forces. As markets become turbulent, 676

Integrating Web-Based Data into a Data Warehouse the old way of doing business with data only from internal operational systems becomes even less effective. A company must know more about its customers, suppliers, competitors, and other external factors than ever before. It must enhance the information from internal systems with information about external factors. Much of this external data is readily available on the Web. Online public libraries (e.g., Library of Congress, ACM digital libraries) are outstanding examples, and so are online public services. An important consideration in the acquisition of data for data warehouses is cost. Building data warehousing systems requires immense capital and human resources investment. According to a survey by MetaGroup in 1999, the average 1997 data warehouse project (with staffing costs) costs $1.9 million to implement. Inmon (1998) estimates that, on average, 80 percent of the time needed to build a data warehouse will be spent on extracting, scrubbing, transforming, and loading data. Upon completion of the data warehouse, the work of data collection will be continued and sometimes expanded to keep up with the business needs. Tremendous labor and related costs are involved with data processing operations. Web integration has the potential to significantly decrease the cost of data processing due to several advantages associated with the Internet. Most Web data is free to use; although some commercial databases available via the Internet will still be able to charge substantial access fees due to the high data quality, these sites typically account for only a small portion of the Web. In addition, WI operation will not increase labor and financial investment to a prohibitive amount. As will be discussed in the next section, many technologies are already developed that can help reduce the cost of integrating Web data into data warehouses. AVAILABLE TOOLKITS FOR WI WI can become a labor-intensive operation without appropriate technologies and software support. Commercial software products aiming to capture data from the Web and load data directly into data stores are available. These software tools retrieve data from the Web page and incorporate the relevant data into new HTML documents or business applications. A “pull” technology in contrast to “push” is commonly used by these tools, also referred to as “parallel” or “intelligent pulling.” This is a way to systematically utilize the Web as one massive database and query the relevant data from the Web (Wilent, 1997). For instance, by using Web Automation Toolkit from webMethods, Inc., DHL built its Web-based shipment tracking systems that allow its employees or customers to post inquiries about the status of a particular package. In addition, the toolkit also makes Web data available to DHL itself. The database for the company service information contains data about locations to which DHL sends packages. This database also holds postal codes, currency exchange rates, holidays, and other facts 677

LEVERAGING E-BUSINESS OPPORTUNITIES about cities and countries DHL serves. The toolkit and its Automation Engine component help DHL keep such vital data current. Instead of manually tracking currency exchange rates, DHL uses the Automation Engine to scan Web pages that contain such information and pull in the most upto-date rates. The Automation Engine, a Java application, runs on Web servers and generates HTML documents from templates that point to other Web addresses (Wilent, 1997). The Automation Engine can be considered a representative WI tool in the market. Search engine technology advances quickly and has become more robust than ever before, which can be useful for WI operation. However, it has become more and more difficult to find the exact information from a large number of Web pages. Search engines may retrieve confusing results because each search engine has its unique sorting rules and mechanisms on keywords and thus returns search results that could be totally different from those of another one. Each search engine has its own search domains and niches that make it unique and of use to a particular user group. Therefore, to get accurate information from the Web, better searching skills are needed as well as profound knowledge of search engines. As in any other IS, Web integration requires human intervention. Web integration creates a demand for Web analysts in an organization. Using a Web analyst instead of a large number of knowledge workers to search external data can achieve outstanding results. The Web analyst would bring expertise to the WI searching process and increase efficiency in surfing the Web for business related information. The Web analyst, first of all, is a highly skilled business analyst who has a solid understanding of the business (Hackathorn, 1997). The Web analyst must be a specialist in search engines, Web browsing, and Web data retrieval. The Web analyst’s job is to locate highly relevant business Web sites correctly and quickly, with feedback expected within one or two months once WI starts. With this feedback from Web analysts, a directory hierarchy (namely, WI directory) of Web sites can be established. This hierarchy will determine how many and what combination of .com, .edu, .org, and .gov sites should be used. Organizations are able to download Web information from these Web sites into their data warehousing systems routinely. CHALLENGES AND CAVEATS FOR WEB INTEGRATION The Internet never is a perfect world. There are many potential problems in using Web information. These problems may lower the effectiveness and efficiency of WI. Koehler (1999) depicts the Internet as a “world brain,” but he also warns that this brain has a “short memory” and “changes its mind a lot,” which implies that Web data is not stable and thus might not be reliable. Moreover, various data formats in the Web impose a challenge to the communicating capacity of the network and the processing ability of 678

Integrating Web-Based Data into a Data Warehouse server processing units that conduct WI operations. Other problems of Web data form potential barriers to organizations’ effective accomplishment of WI. Various data formats make it difficult to track and retrieve Web information flow. The Internet is a multimedia environment mixed with text, graphics, tables, pictures, animation, motion video, and audio (midi, voice). Java-enabled Web browsers allow almost any data format possible on the Web. Multimedia and multiple data types make the Internet an interesting and attractive place, but problems arise when people try to integrate data in a variety of formats from different Web sites into a single data warehousing architecture. Web data is also not formatted for data warehouses. Images and sounds can contain a large amount of hidden content and may not be discernible to all browsers. Loading text, images, and video into the same field of a relational database table does require advanced technologies and creates challenges to Web integration. However, data format discrepancy between Web data format and data warehouse structure is not an insurmountable barrier. Two remedies are plausible. First, toolkits and software applications are available to convert Web data into data warehouse format. HTML documents, pictures, and audio/video objects can be transferred into an acceptable format to data warehousing systems with these toolkits. Second, the star schema is not the sole structure that data warehousing systems can adopt for data structures. Object-oriented data warehousing (OODW) has gained popularity among researchers and practitioners. In OODW, the data item can be an object instead of the traditional data field type, such as text, number, currency, and memo. An object can be of any type — pure text, HTML file (as much as a whole Web page), Java applet/script, audio/video recording, or pictures. Some leading data warehouse vendors have begun product research in this field. The paradigm of the Web is radically different from that of data warehousing. One of the outstanding characteristics of data warehouses is that data warehousing deals with nonvolatile data collection (Inmon, 1996). On the contrary, Web data is volatile and ephemeral, as there is no official or industrial standard for Web page structure and presentation. Hyperlinks link to everywhere with little discipline. The Web’s diversity often challenges an individual’s imagination and nurtures an appreciation for new forms of creative expression. Another characteristic of Web data is inconsistency of access. The most common reason for “failure to respond” messages were “No DNS” (no domain name server) and the “400” errors (page not found or access denied). The “no DNS” error can occur whenever any of the following things happen: the Web site changed and therefore the Web page had 679

LEVERAGING E-BUSINESS OPPORTUNITIES become extinct or the server may be down for any reason. The “400” errors occur when Web pages, but not the site, are removed, eliminated, or edited. Koehler (1999) conducted a thorough survey of Web site permanency and consistency. In his survey, after a six-month period, 12.2 percent of the Web sites and 20.5 percent of the Web pages collected for the study failed to respond when queried, as did 17.7 percent and 31.8 percent, respectively, after one year. Also, it was found that more than 97 percent of the Web sites underwent some kind of change over the six-month period. After one year, more than 99 percent of the web sites had changed. Furthermore, different types of Web pages, Web sites, and domains behave differently. Because Web information is transient, Web integration should be a dynamic process, rather than a static one. There are methods to deal with the pitfalls of changing Web data. First, the Web analyst can routinely check the WI directory of Web sites to guarantee information availability and currency from selected sites. Second, WI toolkits can be configured to track any possible changes of Web data in the directory. Yet another challenge to WI is the possible increase in operating costs due to the trend toward the copyrighting of Web sites. The copyrighting of Web pages would increase the operational costs of Web integration. The copyrighting of Web sites, pages, and data is a hot topic that has sparked intensive discussion. As the Internet becomes a progressively more valuable information resource for the public, organizations feel the need to protect their Web information from abuse or illegal use by resorting to legal copyrights. Although only a small portion of Web sites is under copyright protection, the trend toward copyrighting seems to be increasing. However, Web copyrighting is controversial because violation of Web-page copyright is hard to prove. For example, the author of a book or an article can be assured that the material is copyright-protected, but if the author develops a Web page and places the same article on the Internet, is that work protected (Kirshenberg, 1998)? Internet copyrighting may increase the operation cost of WI. However, we can expect to get high-quality information from copyrighted sites and pages. DETERMINING THE VALUE OF WEB DATA TO BUSINESS Information overload as a common phenomenon in IS systems has long been predicted by researchers (Davis and Olson, 1985; Wetherbe, 1991). Too much information, especially irrelevant information, can weaken decision-making efficacy and decrease organizational performance. Chen and Frolick (2000) argued that most companies today are “data rich” but “information poor” as their ability to manipulate data and to deliver information lags far behind the growth rate of the data. Information overload can get worse if organizations collect too much data from the Internet without knowing the quality and characteristics of the data. The mission of Web 680


Exhibit 4. O'Brien's Information Quality Model
Time: timeliness, currency, frequency, time period
Content: accuracy, completeness, relevance, conciseness, scope, performance
Form: presentation, clarity, detail, order, media

integration is not to transform data warehouses to a repertoire of the Internet but to strengthen decision support quality of data warehousing by expanding antennae into the Internet. It is a critical step for Web integration to determine the business value of Web data effectively and efficiently. Only after acquiring high-quality data from the Web will WI be able to provide appropriate decision support for organizations. Data quality has been intensively studied (Klobas, 1995; O’Brien, 1996; Wang and Strong, 1996; Hackathorn, 1998; Orr, 1998; Rieh and Belkin, 1998; Tayi and Ballou, 1998; Wang, 1998). Researchers have established various frameworks and standards to evaluate information quality from various perspectives. For instance, Wang (1998) identified four roles related to information “products”: (1) information suppliers, (2) information manufacturers, (3) information consumers, and (4) information product managers. He stressed that just like other products, information product also has quality. Four categories and associated dimensions are identified to assess information (product) quality or IQ: (1) intrinsic IQ (accuracy, objectivity, believability, and reputation); (2) accessibility IQ (access, security); (3) contextual IQ (relevancy, value-added, timeliness, completeness, amount of data); and (4) representational IQ (interpretability, ease of understanding, concise representation, and consistent representation). According to Wang (1998), accuracy is merely one of the four dimensions of the intrinsic IQ category. He argued that representational and accessibility IQ emphasizes the importance of the role of information. O’Brien’s model (1996) is one of the most comprehensive data quality assessment models. According to the model in Exhibit 4, data quality can be evaluated from three aspects: time, form, and content, each of which is a multidimensional construct. 681

LEVERAGING E-BUSINESS OPPORTUNITIES A study conducted by Rieh and Belkin (1998) demonstrates that people assess information quality on the basis of source credibility and authority at either the institutional or the individual level. After scrutinizing former information quality studies, as well as summarizing characteristics of information problems and strategies on the WWW, Rieh and Belkin (1998) identified seven criteria for evaluating Web information quality — source, content, format, presentation, currency, accuracy, and speed of loading. Furthermore, they point out that quality and authority are indeed important to people searching the Web. However, Hackathorn (1998) argues that “acquiring content from the Web should not reflect positively or negatively on its quality.” He suggests Web resources be considered in terms of quality and coverage. Thus, he is able to categorize commercial, government, and corporate Web pages into different quadrants of the matrix created by quality and coverage dimensions. According to Rieh and Belkin (1998), the dimension of data quality is one of the most important pertaining to Web data. However, in Web integration, several other dimensions listed are relatively less critical. For example, Web data format is no longer detrimental to WI as discussed previously. The speed of loading can be improved by toolkits available to Web integration that collect Web data automatically during nighttime. A data quality model for Web integration should take the characteristics of both the Web and data warehousing into consideration. Data quality in WI can be evaluated on two dimensions — quality and coverage. Quality and coverage are quantifiable and important to data warehousing. The quality dimension guarantees that information downloaded from the Web is business-relevant and valuable to organizations. In WI, Web data quality is relative and dependent on who is using the data. Even the same Web page may possess different value to two different departments, companies, and industries and, therefore, possesses different quality. Organizations themselves define the quality of Web data based upon their own needs and their Web analysts’ judgment. The coverage dimension refers to the extent to which Web data covers certain information domains and the relevance of the Web data to a business’s purposes. In the following section, a data quality framework is proposed to analyze and evaluate Web data to further assist the successful operation of WI. A FRAMEWORK FOR DATA QUALITY IN WEB INTEGRATION Based on the previous discussion, a data matrix framework of Web data evaluation for WI is composed of the two dimensions — quality and coverage. Quality can be measured with Rieh and Belkin’s (1998) criteria for Web information: 682

Integrating Web-Based Data into a Data Warehouse 1. Source — where information comes from. For example, education (.edu) and government (.gov) sites are generally assumed to have better quality and to be more reliable. Judging from Web sources, users and WI analysts can assign certain credibility to Web pages by authoritative institutions and rate them higher in terms of information quality. 2. Content — what is in the document. Usefulness is a suitable measure of good Web content. 3. Format — the formal characteristics of a document, e.g., how the pages are presented and how the information itself is structured. The format of Web pages can influence the downloading speed of the page from Internet in WI operations and thus influence the efficiency of WI. Web pages with pure text can be downloaded faster than pages filled with pictures, video, and animation. 4. Presentation — how a document is written. Writing “style” differentiates this item from the “format” issue above. 5. Currency — whether a document is up-to-date. Currency of Web data is of significant value in aiding organizations to make quick and correct decisions. 6. Accuracy — whether the information in a document is accurate. 7. Speed of loading — how long it takes to load a document. Waiting too long for a single page from a slow server is a waste of time and a sacrifice of WI efficiency. Web analysts should avoid selecting slow Web sites into the WI directory. Coverage of Web data can be measured by the following criteria: (1) scope — the degree of relevance of the data to the business, which is also a relative measurement by which the organization defines its own standard, and (2) variety — in which type of media format and in how many different types of media formats is the data presented. Zorn et al. (1999) stress text mining technologies to Web content as a promising method for accessing information on the Web. Organizations should select pages within their technology process ability. Highly complicated Web pages eat up time, computer, and human resources. The matrix in Exhibit 5 entails rich information and can be used to assess Web data values from three angles: dynamic nature, quadrant characteristic, and decision support capability. First, it considers the dynamic nature of Web pages. Because the Internet is a dynamic database, it is necessary to keep track of the changes in Web pages selected into the WI directory. Second, data is categorized into two levels in each dimension, low versus high quality and narrow versus broad coverage. Thus, the framework divides Web data into four quadrants, each of which refers to certain quality data. Third, Web data in each different quadrant of this matrix has a different role to play in decision-making support. 683

Exhibit 5. Web Data Matrix Model (Dynamic Trend) (figure: a two-by-two matrix with Quality, low to high, on one axis and Coverage, narrow to broad, on the other, defining Quadrants I through IV; dynamic-trend arrows show Web sites moving from the lower quadrants toward the ideal, Quadrant IV)
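As one way to operationalize the two dimensions of Exhibit 5, the sketch below maps an analyst's composite quality and coverage ratings onto the four quadrants. The 0-to-10 scales and the cutoff value of 5 are assumptions introduced here for illustration; they are not part of the chapter's framework.

```java
// Illustrative sketch: mapping a Web analyst's quality and coverage ratings
// to the quadrants of the Web data matrix. The 0-10 scales and the cutoff
// value of 5 are assumptions, not part of the chapter's framework.
public class WebDataMatrix {

    enum Quadrant { I, II, III, IV }

    // quality: composite of source, content, currency, accuracy, etc. (0-10)
    // coverage: composite of scope and variety (0-10)
    static Quadrant classify(double quality, double coverage) {
        boolean highQuality = quality >= 5.0;
        boolean broadCoverage = coverage >= 5.0;
        if (highQuality && broadCoverage) return Quadrant.IV; // high quality, broad coverage
        if (highQuality)                  return Quadrant.III; // high quality, narrow coverage
        if (broadCoverage)                return Quadrant.II;  // lower quality, broad coverage
        return Quadrant.I;                                     // low quality, narrow coverage
    }

    public static void main(String[] args) {
        // A government statistics site: accurate but narrowly focused.
        System.out.println(classify(8.5, 3.0)); // III
        // A general news portal: broad but of uneven quality.
        System.out.println(classify(4.0, 8.0)); // II
    }
}
```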

Dynamic Nature Ideally, Web pages are expected to grow from the lower stage (Quadrant I, low quality and narrow coverage) to higher stages (Quadrant IV, high quality and broad coverage). However, in fact, the quality of Web data varies in various ways. For example, in dynamic trend 1, organizational sites expand significantly wider in coverage after years’ efforts and experience in the Internet while retaining the same quality. Dynamic trend 2 represents those sites that focus on leveraging information quality without bothering to change the coverage. Our framework has considered the dynamic nature of Web data. It does not matter which way Web pages are compared, either at the same or different points in time. The framework can be used to compare multiple pages at one time point or compare the quality and value of the data from the same site in a series of time points. It is necessary to note that data of the Web can occupy all the quadrants in the framework. Quadrant Characteristics Web data in each quadrant has unique data characteristics. Information in Quadrant III and Quadrant IV is of high quality measured by accuracy, currency, meaningful content, and so on. Data in Quadrant II and Quadrant IV has wider coverage regarding scope and number of media pages. For instance, in Quadrant IV, the commercial online databases from Dialog Information Services and similar vendors have traditionally supplied businesses with high-quality information about numerous topics (Hackathorn, 1998). Library Services, digital libraries, and meta-discovery services fit in this quadrant. However, the complexity of using these services and the 684

Exhibit 6. Web Data Matrix Model (Decision Support Capabilities) (figure: the same quality/coverage matrix, with Quadrant III labeled the tactical quadrant, DSS/MIS; Quadrant IV the strategic quadrant, EIS/ESS; Quadrant II the supportive quadrant, GDSS/GSS; and Quadrant I left unlabeled)

infrequent update cycles may limit their usefulness (Hackathorn, 1998). In Quadrant III, government databases have become tremendously useful in recent years. Previously, public information often was available only by spending many hours of manual labor at libraries or government offices (Hackathorn, 1998). Corporate Web sites often contain vast amounts of useful information in white papers, product demos, and press releases. Company-specific content providers, IT trade publications, investment services, and government agencies are categorized in this quadrant (Hackathorn, 1998). The value of Quadrant II Web data is not in the quality of any specific item, but in its diversity. In combination with the other Web resources, Hackathorn (1998) commented that the “flaky content” in Quadrant II acts as a “wide-angle lens” to avoid tunnel vision of one’s marketplace. Some search engines and some news agencies belong to this type, such as general news agencies, general discovery services, and meta-discovery services (Hackathorn, 1998). Quadrant I information can be ignored for Web integration purposes. Most individual Web pages belong to this type. Decision Support Capabilities The functionality of Web data in any one quadrant of the matrix varies in terms of decision support functions (see Exhibit 6). Quadrant IV data can be used to support strategic decisions due to its high quality and broad scope and thus Quadrant IV can also be called a strategic information 685

LEVERAGING E-BUSINESS OPPORTUNITIES quadrant. Web data meeting Quadrant IV quality can be injected into EIS/ESS systems. The information in Quadrant III has a narrower focus, with relatively high quality, and therefore, it is suitable to assist middlelevel managers such as production managers or marketing managers. This quadrant could be called operation/tactics information quadrant. Information from this quadrant best suits DSS/MIS systems requirements. According to previous analysis, Quadrant II information has value in its diversity; however, the data from Quadrant II needs to be combined with information from Quadrants III or IV due to its low quality. Solely using information from Quadrant II can be risky. Quadrant II information can be useful input, for example, to marketing managers who can utilize this type of information to identify market trends and formulate marketing strategies. Therefore, Quadrant II data is suitable input for GDSS/GSS systems. Quadrant II data can even be used to support strategic decision making because strategic decision making needs broad information and sometimes the quality is not so vital (Mason and Mitroff, 1977). It is necessary to mention that there is actually no clear-cut boundary between areas of information in these four quadrants in terms of decision-supporting functions. Undoubtedly, high-quality information, as well as wide-coverage information, is essential for strategic decision making. Senior managers cannot base their strategic decision making on a single point or a single vein of information, but they could better do so on a broad information plane. They must synthesize manufacturing, marketing, finance, competition, and human resource information to make viable strategic decisions. Web sites that can be categorized into Quadrant IV should always be selected into the WI directory. However, there may not be enough Web sites qualifying for Quadrant IV with sufficiently high quality and broad coverage. To a certain extent, a combination of Quadrant III Web pages can be used in lieu of Quadrant IV pages because Quadrant III possesses high quality but narrower coverage data. The combination of Quadrant III Web sites can compensate for the deficiency of their coverage. Web analysts can choose several leading financial information Web sites as resources to support strategic financial decision making or several leading manufacturing Web sites as resources for production management support. Web integration provides two means of utilizing and digesting Web resources. First, a WI directory in Web integration systems allows managers to directly click on these hyperlinks to visit Web sites to acquire information themselves. For instance, production managers can use ISO Web sites to find product standards, advanced manufacturing technologies, and the like. Second, Web analysts locate, select, and summarize data from preselected Web sites and load the data into data warehousing systems. The data can be further used by other IS systems such as EIS/ESS, DSS, GDSS/GSS, etc. For example, the elegant interface and drill-down feature 686

Integrating Web-Based Data into a Data Warehouse makes EIS the most powerful access tool for data warehouses (Murtaza, 1998). With rich information from data warehouses, EIS becomes more attractive to senior managers (Sprague and Watson, 1996). With richer external information from Web integration, EIS offers more accurate information, broader vision, and fresher ideas than ever before. CONCLUSION AND IMPLICATION The Internet can be viewed as the largest external database accessible to most organizations. Web integration, which combines data warehousing with Internet technologies, is a systematic method of using Web data to improve organizational decision-making. The business value and advantages of using Web data have been discussed in detail. We delineate the working mechanism of Web integration, in which a WI directory and Web analysts have important roles to play. The critical step for successful Web integration is to accurately evaluate the business value of the Web data. Effectively determining Web data quality is the critical success factor of WI and helps to avoid the information overload that commonly happens to IS systems. A framework is presented for evaluating data quality in Web integration. Web integration is attractive to practitioners who want to exploit the potential of Web resources for business purposes and maximize the capability of data warehousing systems, including IT managers, business analysts, data warehouse designers, and Web application vendors. WI is useful to all levels of management, especially those using data warehousing and data mining to support their decision making. WI represents a new trend and a promising direction for the data warehousing industry, data warehouse designers, and IT managers. The future of data warehousing, according to a report from the MetaGroup, can be expressed in the formula “DW+ROLAP+WWW=$$$” (Gardner, 1997). The first wave of Web development is over, and it is high time for Web integration. WI will challenge researchers with deeper issues concerning information refinement and knowledge management. Web integration “…will be an agent of change to the controlled and structured world of data warehousing. This is a necessary change—a maturing of the basic objectives of data warehousing into a more comprehensive system of knowledge management for the enterprise.” (Hackathorn, 1997)

The future of Web integration is promising, but it is still in an early stage. There are many potential problems to be solved. One problem is how to integrate various types of Web data into a homogeneous data warehouse schema. Second, the automatic collection of data, eliminating the need for manual rework, will determine the efficiency of Web integration. Automatic Web data collection relies strongly on multimedia data-mining technology.

Multimedia data-mining methods and algorithms deserve intensive research efforts. Third, the potential role and impact of WI on an organization's knowledge management is still unknown. All these issues deserve further research effort, but the payback should be abundant.

References

Castelluccio, M. (1996). "Data Warehouse, Marts, Metadata, OLAP/ROLAP, and Data Mining – A Glossary," Management Accounting, 4(78), pp. 59–61.
Chen, L.D. and Frolick, M.N. (2000). "Web-Based Data Warehousing," Information Systems Management, Spring, pp. 80–86.
Corbin, L. (1997). "Data Warehouses Hit the Web," Government Executive, 29(2), pp. 47–48.
Davis, G.B. and Olson, M. (1985). Management Information Systems: Conceptual Foundations, Structure, and Development, 2nd ed., McGraw-Hill Book Company.
DeSanctis, G. and Gallupe, R.B. (1987). "A Foundation for the Study of Group Decision Support Systems," Management Science, 33(5), pp. 589–609.
Gardner, D.M. (1997). "Cashing in with Data Warehouses and the Web," Data Based Advisor, p. 60.
Hackathorn, R. (1997). "Farming the Web," Byte, 22(10), p. 43.
Hackathorn, R. (1998). "Roughing the Web for Your Data Warehouse," DBMS, 9(11), p. 36.
Inmon, W.H. (1996). "The Data Warehouse and Data Mining," Communications of the ACM, 39(11), pp. 49–50.
Inmon, W.H. (1998). Data Warehouse Performance, New York, John Wiley & Sons, Inc.
Internet Software Consortium (2001). Available at: http://www.isc.org.
Kimball, R. and Merz, R. (2000). The Data Webhouse Toolkit: Building the Web-Enabled Data Warehouse, New York, John Wiley & Sons, Inc.
Kirshenberg, S. (1998). "Info on the Internet: User Beware!" Training & Development, 11(52), p. 83.
Klobas, J.E. (1995). "Beyond Information Quality: Fitness for Purpose and Electronic Information Resource Use," Journal of Information Science, 21(2), pp. 95–114.
Koehler, W. (1999). "An Analysis of Web Page and Web Site Constancy and Permanence," Journal of the American Society for Information Science, 50(2), pp. 162–180.
Mason, R.O. and Mitroff, I.I. (1973). "A Program for Research on Management Information Systems," Management Science, 19(5), pp. 475–487.
Murtaza, A.H. (1998). "A Framework for Developing Enterprise Data Warehouses," Information Systems Management, pp. 21–26.
NetSizer (2001). Available at: http://www.netsizer.com.
O'Brien, J.A. (1996). Management Information Systems: Managing Information Technology in the Networked Enterprise, 3rd ed., Burr Ridge, IL, Richard D. Irwin.
Orr, K. (1998). "Data Quality and Systems," Communications of the ACM, 41(2).
Rieh, S.Y. and Belkin, N.J. (1998). "Understanding Judgment of Information Quality and Cognitive Authority," Proceedings of the 61st Annual Meeting of the American Society for Information Science, 35, pp. 279–289.
Sprague, R.H. and Watson, H.J. (1996). Decision Support for Management, Upper Saddle River, NJ, Prentice-Hall.
Tayi, G.K. and Ballou, D.P. (1998). "Examining Data Quality," Communications of the ACM, 41(2).


Wang, R.Y. and Strong, D.M. (1996). "Beyond Accuracy: What Data Quality Means to Data Consumers," Journal of Management Information Systems, Spring, 12(4), pp. 5–34.
Wang, R.Y. (1998). "A Product Perspective on Total Data Quality Management," Communications of the ACM, 41(2).
Wetherbe, J. (1991). "Executive Information Requirements: Getting it Right," MIS Quarterly, 15 (March), pp. 51–65.
Wilent, S. (1997). "Pulling Packages from the Web," Databased Web Advisor, p. 32.
Wilson, L. (1997). "Weaving a Web Warehouse Supplement," Software Magazine, July, pp. 13–14.
Zorn, P., Emanoil, M., Marshall, L., and Panek, M. (1999). "Mining Meets the Web," Online, 23(5), pp. 16–28.

ACKNOWLEDGMENTS We thank Dr. Ted E. Lee and Dr. Mitzi G. Pitts for their review and insightful comments.



Chapter 55

At Your Service: .NET Redefines the Way Systems Interact
Drew Robb

There is no doubt that the Internet has profoundly affected people's lives and the way they conduct business. But what has been seen so far is just the beginning. Tim Berners-Lee, founder of the World Wide Web and currently head of the World Wide Web Consortium (www.w3c.org), says that the Internet will really come into its own when machines are able to talk to each other, something that he calls the Semantic Web. This differs greatly from the current Web infrastructure, which is geared toward human-machine interaction. A person downloads a document, reads it, and clicks on any relevant links to get further data. Much of the data one needs is not accessible via Web pages, but exists in databases or other formats, inaccessible to Web robots and other machines. Certain databases, of course, are online, such as those for checking movie or flight schedules, but most are not. And what is accessible requires manual searches. The Semantic Web, on the other hand, involves structuring that data in such a way that applications can interact without human intervention. In some ways, it would be like having a global relational database.

"The concept of machine-understandable documents does not imply some magical artificial intelligence which allows machines to comprehend human mumblings," says Berners-Lee. "It only indicates a machine's ability to solve a well-defined problem by performing well-defined operations on existing well-defined data. Instead of asking machines to understand people's language, it involves asking people to make the extra effort."
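To make the idea of "well-defined data" concrete, here is a minimal sketch, not drawn from the chapter, of a machine-readable event record and a program that acts on it using Java's standard DOM parser. The XML element names are invented for illustration and merely anticipate the entertainment-planning example discussed below.

```java
// Sketch: a machine-readable "event" record and a program reading it.
// The XML vocabulary (event, title, start, venue) is invented for
// illustration; it is not a published standard.
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class EventReader {
    public static void main(String[] args) throws Exception {
        String xml =
            "<event>" +
            "  <title>Symphony No. 9</title>" +
            "  <start>2003-06-14T20:00</start>" +
            "  <venue>Orchestra Hall</venue>" +
            "</event>";

        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes()));

        // Because the structure is well defined, any program that knows the
        // tags can pull out the pieces it needs, with no human reading required.
        String title = doc.getElementsByTagName("title").item(0).getTextContent();
        String start = doc.getElementsByTagName("start").item(0).getTextContent();
        System.out.println(title + " starts at " + start);
    }
}
```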



LEVERAGING E-BUSINESS OPPORTUNITIES A form of this type of interaction has existed for the past few decades with electronic data interchange (EDI), but it is expensive, cumbersome, and consequently its use has been severely limited. This has given rise to the concept of Web services — a mix of programming and data existing on a Web server that provides users with some type of information or service. These already exist, to some extent, whenever one checks a stock quote or the weather, for example. But such services are not universally available and require one to go from one site to another and individually access the data because each site has its own data format. The goal is therefore to have all the data existing in a common structure so that data can be shared. For example, say one wants to find out the best way to spend a Saturday night. Currently, one would go to one site to check movie schedules, another to check concerts, another to check sporting events, and, if nothing strikes one’s fancy, one would pull out TV Guide. Well, each of these options has certain characteristics in common. They each have an event (movie, concert, game, or show), a time, and a location so they could all be put into a common structure and compared side by side. One’s personal calendar also has the event time and location functions. With Web services, one should be able to go to one’s calendar, see that one has some time free on Saturday, and click a button to locate entertainment options. The software then performs a search of the possible choices and gives a list of entertainment that fits one’s schedule. One clicks on the one of interest, that data goes into one’s calendar, and one is immediately connected to the event’s Web site for purchasing tickets. When one buys tickets, that expense is automatically logged in one’s personal finance program. In a business context, this would mean that when a company ships a product, the event is not only recorded in the seller’s ERP system, but the data is also sent to the customer’s system for tracking. In addition, the seller’s suppliers could receive the data for automatic inventory replenishment and production planning purposes. Eventually, this may evolve into one seamless virtual supply chain. THE NEED FOR STANDARDS One of the first thoughts to strike someone reading about .NET is that most, if not all, of the touted .NET/Web services functionality exists currently to one degree or another. Quite true. However, the problem is in the lack of standards that enable these types of services to seamlessly interact on a “plug-and-play” basis. There are, therefore, four standards that have been developed for Web services and that form the basis for .NET. These are: 1. XML. The first necessary ingredient is eXtensible Markup Language, which is a formal recommendation of the World Wide Web Consor692

1. XML. The first necessary ingredient is eXtensible Markup Language, which is a formal recommendation of the World Wide Web Consortium. XML is a way of exchanging information over the Web by creating common information formats and then sharing both the data and the format that describes that data. It is an extensible format, meaning that entities can create their own markup tags. For example, an entity could create a tag "price" and then follow that by an item's price. Any other program that was set to recognize the "price" tag would then know how to use the data that followed. A shopping robot would use it for price comparison, a billing program would plug it into an invoice, etc.
2. SOAP. Simple Object Access Protocol (SOAP) is a method that allows one program to communicate with another program via HTTP. It works whether the two programs are operating on the same or different operating systems. Originally developed by Microsoft, DevelopMentor, Inc., and Userland Software, Inc., SOAP has joined XML as a standard of the World Wide Web Consortium. SOAP lets computers call each other and exchange information despite the existence of a firewall, because most firewalls allow HTTP to pass through. (A brief sketch of a custom XML tag carried inside a SOAP message follows this list.)
3. UDDI. Universal Description, Discovery and Integration is an XML-based Web services "yellow pages." Organizations list their publicly available Web services in the UDDI Business Registry. Then other computers can access this Web site to find who has the service they need. An individual organization can also use the UDDI standard to create its own services directory for internal use. More than 300 companies have now signed on to use UDDI. In addition to the technology companies, the list includes firms such as Ford Motor Company and Dow Chemical.
4. WSDL. The fourth standard is Web Services Description Language, an XML-based language UDDI uses to describe a Web service in a manner such that other computers can locate it and interact with it. It is derived from SOAP and IBM's Network Accessible Service Specification Language (NASSL). In January 2002, the World Wide Web Consortium set up the Web Services Description Working Group, which is developing this language.
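To make the first two standards concrete, the short sketch below shows a custom "price" tag of the kind described in item 1 carried inside a SOAP 1.1 envelope, and a consuming program pulling the value out. It uses Python's standard library rather than Microsoft's .NET tooling, and the namespace, tag names, SKU, and price are invented for illustration only.

```python
# A minimal, self-contained sketch: a SOAP 1.1 envelope whose body carries a
# custom <price> tag, and a consumer that extracts the value. The
# urn:example:catalog namespace and the element names are hypothetical.
import xml.etree.ElementTree as ET

soap_message = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPriceResponse xmlns="urn:example:catalog">
      <item sku="B-1595">
        <price currency="USD">129.95</price>
      </item>
    </GetPriceResponse>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(soap_message)
# Namespaces keep one organization's "price" tag from colliding with another's.
ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/",
      "cat": "urn:example:catalog"}
price = root.find("./soap:Body/cat:GetPriceResponse/cat:item/cat:price", ns)
print(price.get("currency"), price.text)   # prints: USD 129.95
```

In practice the envelope would travel as the body of an ordinary HTTP POST, which is why it passes through most firewalls; any program that recognizes the "price" tag, whether a shopping robot or a billing system, can use the value without knowing how the document was produced.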


There are other standards needed to make Web services interoperable. Microsoft, for example, has developed Discovery of Web Services (DISCO), a type of XML document used for querying a particular URL to discover what Web services it offers. Security, in particular, is an issue with any type of Web service, so companies are developing specifications in this area. Microsoft, for example, has its Passport service for identifying online users and, in September 2001, Sun Microsystems, Cisco Systems, NTT DoCoMo, General Motors, and others founded the Liberty Alliance Project (www.projectliberty.org) to develop a competing identification standard. Then, on April 11, 2002, Microsoft, IBM, and VeriSign jointly announced a new specification called WS-Security, a set of SOAP extensions for exchanging secure, signed messages in a Web services environment.

Microsoft and IBM, which own the patents to some of these standards, have not issued them as royalty-free standards. Although they are not currently charging fees for their use, they may at some time decide to start collecting money for their usage.

.NET COMPONENTS

.NET consists of four different software elements, and Microsoft is developing each of these.

1. Smart client software. This involves adding XML functionality to the operating system for any type of client device. Microsoft has three products in this area. Windows XP, released in October 2001, is designed for desktop and laptop computers. Windows CE .NET replaces Windows CE 3.0 and is designed for small-footprint devices such as PDAs and wireless devices; it supports Bluetooth and 802.11 wireless transmission protocols. Windows XP Embedded is a componentized version of XP designed for use in smart devices such as set-top boxes, retail kiosks, and thin clients using x86 architecture processors. Developers can select which Windows components they want to install in the device. XP Embedded requires 5 MB of RAM, while CE .NET can get away with only 200 KB. Microsoft is also developing a specialized version of Windows XP, called Windows XP Tablet PC Edition, to accompany the Tablet PC that the company released in late 2002.
2. .NET enterprise servers. With these, Microsoft incorporates deep-level support of XML so that enterprises can take existing legacy applications and integrate them with newer technology rather than having to replace them. These servers also form the basis for hosting Web services. The .NET Server family includes Application Center 2000, BizTalk Server 2000, Commerce Server 2000, Content Management Server 2001, Exchange Server 2000, Host Integration Server 2000, Internet Security and Acceleration Server 2000, Mobile Information Server 2001, SharePoint Portal Server 2001, and SQL Server 2000. Before the end of 2002, Microsoft had planned to release the Windows .NET Server operating system as a replacement for Windows 2000 Server.
3. Developer tools. Microsoft's Visual Studio .NET provides rapid application development tools for creating Web services. It supports the Visual Basic, C++, and C# programming languages in an Integrated Development Environment, and it automatically creates the XML and SOAP interface to translate an existing application into a Web service.

4. Web services. The final element of Microsoft's strategy is the creation of its own Web services. The first one was the Passport user identification system. The company has released its second service, MapPoint .NET Version 2.0, a programmable platform for mapping and location-based services. Hosted by Microsoft, it has worldwide maps, street addresses, and driving directions for the United States, Canada, and 11 Western European countries. The company has been using MapPoint 1.0 for its own sites (Expedia, Carpoint, HomeAdvisor) since September 2001, but 2.0 is the first publicly available version of the software.

In the meantime, there are plenty of real-world examples of .NET in action. Here are just a few examples of organizations that are already using parts of .NET:

• Microsoft's travel site, www.expedia.com, utilizes MapPoint .NET. Go to the Web site and click on the "Maps" tab.
• Dollar Rent-a-Car uses .NET XML Web services to connect its legacy mainframe reservation system with its Web site (www.dollar.com) as well as to establish connections with tour operators' systems.
• The Ohio Department of Education is conducting a pilot application using Visual Studio .NET and SQL Server 2000 to demonstrate that the technology is capable of handling all student scheduling and grade card reporting for the entire state.
• NASDAQ.com is developing a system using .NET Alerts to provide customized information to its users. Users specify which market indicators to monitor, the trigger point at which each alert is to be generated, such as when a stock hits a certain price, and the preferred method of delivery — via cell phone, e-mail, or Microsoft Windows Messenger.
• The State of Pennsylvania uses Microsoft Passport on its Pennsylvania Power portal (papower.state.pa.us/PAPower/) to verify users in interactions with state agencies, including access to personal healthcare data.

IMPLEMENTING WEB SERVICES

While a lot still needs to be sorted out in terms of technology and standards, Web services are coming. Microsoft has placed .NET at the center of its overall corporate strategy. If you are already a Microsoft shop, as long as you keep upgrading your operating systems, applications, and development tools, you will have all the basics for implementing your own Web services or taking advantage of Microsoft's.


Every new product coming out of Redmond is designed to work with Web services, and the old ones will not be available for much longer.

But Microsoft is far from being the only company working on Web services. IBM (Armonk, New York), Sun Microsystems, Inc. (Palo Alto, California), and BEA Systems, Inc. (San Jose, California) are actually all further along in the area. But Microsoft brings such a strong developer community and end-user base to the table that it appears to have an advantage over other approaches. Sun and IBM, for example, also have Web services platforms available, based on Sun's Java 2 Enterprise Edition (J2EE). Theoretically, all these Web services products (whether from Microsoft, Sun, or IBM) will work together because they are based on the same standards. But in the real world, the vendors are accusing each other of adding proprietary extensions to make the services work better with their own products. At this point, it is too early to tell how that battleground will turn out.

If one wants to get started on implementing .NET right away, the place to start is by reviewing the resources on Microsoft's Web site. Go to www.microsoft.com/net to find a simple overview of the .NET platform, as well as links to technical data needed for evaluating .NET and determining the steps necessary to use it in one's own enterprise.

Editor's Note: See also Chapter 34, "J2EE versus .NET: An Application Development Perspective."


Chapter 56

Dealing with Data Privacy Protection: An Issue for the 21st Century
Fritz H. Grupe
William Kuechler
Scott Sweeney

On January 11, 2001, the attorneys general of 44 states and the Federal Trade Commission (FTC) submitted their legal opinions to the Bankruptcy Court of Boston that the online children's toy store Toysmart, a subsidiary of the Disney Corporation, should be prevented from selling its customer list while it was in bankruptcy status. Prior to entering bankruptcy, Toysmart had indicated in its privacy policy that no data would be released to other companies. The primary legal basis for this opinion was that the Children's Online Privacy Protection Act of 1998 requires parental consent for the release of data pertaining to the children concerned. The day after the FTC lawsuit, Disney stepped in and helped dispose of the controversy by paying $50,000 to obtain the list and keep it confidential.

The reaction of the American and European audiences to this minor case illustrates the wide gulf in cultural–political perceptions concerning the protection of privacy data. To Americans the case was a great success on two points. First, existing congressional legislation ensured an "adequate" level of protection of the personal data involved. Second, the parent firm was self-regulated enough to prevent dispersion of the information by making a financial sacrifice. However, to appalled Europeans, especially the data protection commissioners of the EU member countries, the case is evidence in favor of their own point of view. If it takes 44 state attorneys general and the FTC to block the actions of one minor company in one American courtroom, how could this country possibly ensure "adequate" personal data protection on the Internet?1


DATA PRIVACY IN THE EUROPEAN COMMUNITY

The European Commission (EC) Data Protection Directive and similar comprehensive legislation found in other nations are a culmination of work passed down over the ages. Some may even argue traceability to other conventions on human rights, or even to Articles 4 and 11 of the 1789 Declaration of the Rights of Man and of the Citizen. As early as 1968, the first European Working Party of EC ministers met to consider data protection rights and to determine whether existing human rights conventions and domestic laws were sufficient to protect data privacy. In 1981 the ministers signed Convention 108, "Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data." More than 24 countries signed Convention 108. Chapter 3, Article 12 of the Convention governs transborder data flows (TBDFs), and it allows nations to block data flows to another party if there is not equivalent privacy protection. Considering that Convention 108 was developed with the lowest common denominator in existing data protection laws (and, in the case of Italy and Greece, no applicable laws whatsoever), one may have divined at the time that this was not the end of the matter.

The EC issued a particularly strict draft concerning data protection in September 1990. Simitis describes the long process from draft to signature that lasted over five years.2 As with many battles within the Council of Ministers, lines were drawn between the centralizing power of Brussels and the sovereign powers of the member states. Lost in the political rhetoric was whether it was even wise, despite the good intentions, to set up a huge new bureaucratic body within member states to review applications, collect fees, regulate and monitor compliance, and impose penalties, for example, in the rigid model of France's Commission Nationale de l'Informatique et des Libertés (CNIL). Eventually, this colorful legislative process led to Directive 95/46/EC, October 24, 1995, "Protection of Individuals with Regard to the Processing of Personal Data and on the Free Movement of Such Data," and Directive 97/66/EC, December 15, 1997, "Concerning the Processing of Personal Data and the Protection of Privacy in the Telecommunications Sector." Signatories to the Data Protection Directive 95/46/EC were required to comply by October 24, 1998.

Transborder data flow (TBDF) refers to "a complex set of issues that have come to the forefront as a result of the electronic transfer or exchange of information across national boundaries. TBDFs involve the flow of digital information across borders for the storage or processing in foreign computers."3

TBDFs outside the 15 EU member countries are regarded as a "transfer of information to third countries."

DATA PRIVACY PROTECTION PRINCIPLES

As of March 1, 2001, a major new factor entered into the way in which international companies must treat data on their employees and customers. This factor is the Data Protection Directive of 1998 (DPD), which took force at that time. It became mandatory to enact the European Directive into national legislation in 1998. Each EU country is, of course, sovereign; and since it is the national law that applies, each country's laws must be harmonized with regard to the principles outlined in the Directive. The DPD controls how, within the European Union, data regarding living individuals can be acquired, stored, transmitted, and processed; and these controls have considerable impact not only on computerized records but also on data stored in all formats, including paper. The DPD has a significant impact on all companies doing business in the 15 member countries of the EU, even if their home office is in the United States or any other country outside of the EU. With some variation, the United Kingdom has enacted similar requirements.

Information about data subjects is regulated according to eight data protection principles that guide the formulation of more specific procedures and standards that control how data will be collected, who can be involved in the processing of data, and how data can be used and distributed. The DPD grants individuals, who are sometimes referred to as data subjects, broad rights previously not available, including the right to seek compensation for damages incurred by violations of these principles. One set of principles relating to the quality of data requires that data be:

• Processed fairly and lawfully.
• Collected for specified, explicit, and legitimate purposes and not further processed in a way incompatible with those purposes. Further processing of data for historical, statistical, or scientific purposes shall not be considered as incompatible provided that member states provide appropriate safeguards.
• Adequate, relevant, and not excessive in relation to the purposes for which they are collected or for which they are further processed.
• Accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that data that are inaccurate or incomplete, having regard to the purposes for which they were collected or for which they are further processed, are erased or rectified.
• Kept in a form that permits the identification of data subjects for no longer than is necessary for the purposes for which the data were collected or for which they are further processed. Member states shall lay down appropriate safeguards for personal data stored for longer periods for historical, statistical, or scientific use.


In addition, there are principles that guide how data regarding the data subjects may be processed. Member states shall provide that personal data may be processed only if:

• The data subject has given his consent unambiguously; or
• Processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject entering into a contract; or
• Processing is necessary for compliance with a legal obligation to which the controller is subject; or
• Processing is necessary in order to protect the vital interests of the data subject; or
• Processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller or in a third party to whom the data are disclosed; or
• Processing is necessary for the purposes of the legitimate interests pursued by the controller or by the third party or parties to whom the data are disclosed, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection under Article 1(1).

Some 60 countries have either implemented the data privacy protection principles in law or are considering doing so. Jacqueline Klosek provides a comprehensive summary of the DPD as enacted by the various EU member states.4 "Due to the fact that European Union member states were given considerable freedom in transposing the Data Protection Directive into national law, there are some disparities among the national measures that have been adopted in the various member states. As such, entities dealing with personal data in Europe will need to be well acquainted not only with the European Directive but also with the implementing legislation of the 15 member states."

The United Kingdom's Data Protection Act is probably the most harmonious with America's position on data protection issues, seeking for the time being no more than compliance with the convention's minimum requirements. If the U.S. Congress decides to adopt legislation compatible with the EU Data Protection Directive, it may want to start with the United Kingdom's regulatory model. The information commissioner (formerly data protection commissioner) maintains the public register of data controllers throughout the country, and the registration form can be accessed online or by telephone. The fee for notification is £35, renewed annually, with any changes made within 28 days.

Organizations must register if they are processing personal information for the following purposes:8

• Private investigation
• Health administration
• Policing
• Crime prevention and prosecution of offenders
• Legal services
• Debt administration and factoring
• Trading or sharing in personal information
• Constituency casework
• Education
• Research
• Administration of justice
• Consultancy and advisory services
• Canvassing political support among the electorate
• Pastoral care
• Provision of financial services and advice
• Credit referencing
• Accounts and records

The following organizations are exempt from notification if they are processing for the following purposes:

• Personal, family, and domestic
• Public registers
• Not-for-profit organizations (small clubs, voluntary organizations, church administration, and some charities)
• Core business purposes (small business)
• Staff administration
• Advertising, marketing, and public relations (for one's own firm only)
• Administration of customer and supplier records

However, data controllers must comply with the Data Protection Act even if they are exempt from formal notification. TBDFs are covered on the purpose form, and the data controller of the organization must indicate whether personal data is transferred outside the European Economic Area (EEA). The names of the countries are listed unless over ten are indicated (worldwide). Posting information on a Web site constitutes worldwide transfer.

What has been the effect of the Office of the Information Commissioner (OIC) on business activities? From mid-2000 to mid-2001, the OIC investigated almost 900 complaints and secured 21 convictions.5 Most of the convictions were for workers abusing their data processing duties and tapping into a firm's database for their own ends. The OIC's biggest worries have little to do with the processing done by private firms, but rather with government plans to put a greater amount of services and police records online, and with European plans to make Internet service providers retain customer data for up to seven years for security purposes.

The OIC also administers the Regulation of Investigatory Powers Act of October 2000, which regulates how employers monitor employee e-mails and Internet access.

While most United Kingdom firms seem to be unaffected by the data protection requirements thus far, there are some major differences with regard to data operations and electronic commerce compared to firms in the United States. For example, unlike in the United States, it would be illegal for United Kingdom corporations to sell customer data without first informing those on the list.6 A similar issue has surfaced in the field of medical research. For example, due to the Data Protection Act, many hospitals are no longer automatically passing on information to the United Kingdom's National Cancer Registry, one of the best in the world.7 Legally, the confidential information can only be supplied with the patient's consent. The British government is now considering revising the regulations because "the benefits to public health research of passing on data without consent (may) outweigh the rights of patients to have a say."

The European position, now taken up by numerous other countries, can be characterized as being:

• Protective of personal rights with respect to data about individuals
• Restrictive regarding the flow of personal data out of the country of origin, except to other countries honoring the data privacy principles (DPP)
• A view of people as citizens who are in control of their personal data
• Regulated by general laws, principles, procedures, and standards, with data collection overseen by governmental agencies established for this purpose

DATA PRIVACY IN THE UNITED STATES

The American position, on the other hand, can be characterized as being:

• Unprotective of data about individuals collected by businesses and government
• Unrestricted flow of data among companies
• A market-driven view of people as consumers under which data is seen as a saleable, usable commodity that belongs to the corporation
• Reliance on self-regulation by companies to respect an individual's privacy
• Regulated by specific pieces of legislation (i.e., by sector) that relate to particular aspects of privacy, but not to privacy generally

The United States has not had general privacy protection safeguards in place that are equivalent to those enacted in conjunction with the DPP.

Exhibit 1. United States Legislation Offering Some Privacy Protection

Fair Credit Reporting Act (1970)
Privacy Act (1974)
Family Educational Rights and Privacy Act (1974)
Right to Financial Privacy Act (1978)
Electronic Funds Transfer Act
Cable TV Privacy Act (1984)
Electronic Communications Privacy Act (1986)
Video Privacy Protection Act
Driver's Privacy Protection Act (1994)
Children's Online Privacy Protection Act (1998)
Gramm–Leach–Bliley Act (1999)
Hospital Privacy Protection Act

The United States has enacted specific legislation with regard to particular problem areas. Often, this has left open the question of privacy in other sectors and problem areas. Naturally, groups such as Computer Professionals for Social Responsibility, the Privacy Coalition, and the like have called for broader protections. Exhibit 1 displays a few of the major pieces of legislation that bear on some aspect of privacy.

By the time the EC's DPP took effect, online sales had tripled from about $3 billion in 1997 to approximately $9 billion in 1998.8 Prior to the directive, most information systems managers considered restrictions on transborder data flows one of their most minor issues.9 In addition to the adjustments companies make to United States laws, self-regulatory efforts such as the TRUSTe and BBBOnLine Privacy Seal programs, intended to reduce consumer concern over privacy, are emerging to identify Web sites that voluntarily agree to follow privacy policies. These approaches rely heavily on the willingness of companies to apply for ratings, to maintain clearly identified policies, and to police themselves. There have been complaints that the seals of approval offered by these agencies are not rigorously enforced.

Two days after the final Data Protection Directive compliance date, United States Secretary of Commerce William Daley announced that the EU and the United States would not interrupt data flows, at least for the time being. However, worried about what constitutes an "adequate" level of protection, many influential Americans adopted an alarmist viewpoint about the directive. Information Technology Association president Harris Miller warned that, "This kind of policy could grind the flow of transborder data to a halt … we urge the Clinton Administration to engage the European Commission and its member states, as well as the business communities on both sides, to engage in an in-depth discussion of the implementation of the directive, its implications upon transborder data flow (TBDF), and, most importantly, already existing practices and policies…."10

Some writers warned of endangerment to the Organization for Economic Cooperation and Development guidelines, major barriers to world trade,11 or even a trade war. Ohio State University law professor and former chief counselor for privacy in the United States Office of Management and Budget Peter P. Swire provided a thorough analysis of how the directive may affect information technology architectures, intranets, extranets, e-mail, the World Wide Web, human resource records, call centers, the financial services sector, medical practices, smart cards, travel reservation systems, and so on.12 At the time this chapter was written, Swire counseled American executives that "a potential way to comply with some of the directive's requirements is to move data processing operations, and the accompanying jobs, into Europe."

After two years of discussions with the European Data Privacy Commissioners, the United States Department of Commerce negotiated the Safe Harbor Principles on July 18, 2000. The principles are posted on the department's Web site (http://www.ita.doc.gov) and in the Federal Register (http://www.access.gpo.gov). Under the Safe Harbor Principles (see Exhibit 2), United States companies can self-certify online that they are fulfilling EU rules on data privacy (http://web.ita.doc.gov). Self-certification needs to be revalidated every 12 months. Among the many firms that have already self-certified are Microsoft Corporation, Hewlett-Packard, and Dun & Bradstreet.

SAFE HARBOR

Over 125 companies, including Intel and Microsoft, have signed Safe Harbor agreements. Safe Harbor refers to a program to be used by firms in countries where personal data processing practices have been deemed inadequate to protect privacy. It was mainly implemented to accommodate the firms in the largest market in the world, the United States. Negotiated by an EC working party and representatives of the FTC in March 2000, it seeks to reduce the uncertainties of trans-Atlantic data operations so that affected firms will be able to continue their current operations. Safe harbor firms "will then be protected from any arbitrary action by European data protection authorities to cut off data to their companies."13

The seven principles of the Safe Harbor program are notice, choice, onward transfer, security, data integrity, access, and enforcement. Notice indicates that individuals must be notified about how the data will be used. Choice is the ability to opt out or in. Onward transfer limits disclosure of data to third parties to uses consistent with notice and choice. Security requires reasonable measures to protect personal information from loss or misuse. Data integrity relates to data accuracy. Access requires companies to make the data held on an individual available to that person. Enforcement requires mechanisms that ensure compliance with the principles.

Exhibit 2. Seven Principles of Data Privacy under Safe Harbor

1. Notice. In clear language, at the time individuals are asked to provide personal information or as soon thereafter as possible, data processing companies must inform individuals about the purposes for which information is collected, how to contact the organization regarding inquiries or complaints, the types of third parties to which the information is disclosed, and the choices available for limiting its use and disclosure.
2. Choice. Data processors must offer individuals the opportunity to opt out of data collection, and individuals must receive information on how their personal information is used by or disclosed to third parties. For collecting sensitive information (i.e., medical and health information, racial or ethnic information, political opinions, religious or philosophical beliefs, trade union membership, or sexual information), the individual must give an affirmative or explicit (opt-in) response.
3. Onward Transfer. Data processors may only disclose personal information to third parties when this is consistent with the principles of notice and choice.
4. Security. Data processors that create, maintain, use, or disseminate personal information must take reasonable measures to ensure its protection. This includes efforts to protect it from loss, misuse, unauthorized access, disclosure, alteration, and destruction.
5. Data Integrity. Data processors may only process personal information relevant to the purposes for which it has been gathered. To the extent necessary for those purposes, an organization should take reasonable steps to ensure that data is accurate, complete, and current and should avoid secondary uses.
6. Access. Individuals must be given reasonable access to data held about them, and they should be able to correct or amend that information if it is inaccurate.
7. Enforcement. Mechanisms should be in place to ensure compliance with these principles. This includes methods of recourse for the individuals to whom the personally identifiable data pertains. Affordable means must be available by which an individual's complaints and disputes can be investigated and resolved and, perhaps, damages awarded. Data processors must have procedures for verifying that the assertions businesses make about their privacy practices are effectively and truly presented. Sanctions must be in place to ensure compliance by the data processor.

Some American multinational corporations may find the Safe Harbor principles more burdensome and expensive than doing automatic processing directly from subsidiaries in the European countries. Registration for Safe Harbor is with the United States Department of Commerce.

Safe Harbor operations in Europe are not strictly an either–or proposition. The European Commission recently authorized transfers of personal data outside the EEA provided such transfers were protected by a Data Transfer Agreement that included (or resembled) the model clauses published on the EU data protection Web site. Like Safe Harbor, these clauses require the data importer (usually in the United States) to protect the data to the same level as in Europe. However, this approach is less burdensome insofar as one can have a data transfer agreement for a particular transfer of a set of data for a particular purpose, whereas Safe Harbor means the company has self-certified its entire operation as commensurate with European standards of protection.

Yet somewhat more flexibility is permitted when data subjects give their informed consent to allow their data to be transferred. However, companies must take action to ensure that the subjects' permissions are not coerced in any way and that permission is freely given with full knowledge of the consequences.

ISSUES TO BE CONSIDERED

United States and international companies must decide for themselves how seriously they need to take DPP requirements into account. These are some of the issues they should consider.

Data Accuracy

In many countries, IT systems, governmental and private, are not as advanced as they are in Europe and the United States. Nor is there necessarily the will or the money available to bring them to current standards. Hence, for some corporations, data privacy takes a back seat to bringing the systems to the point where they can accurately indicate whether a person is a citizen, is eligible for retirement benefits, or has actually received a warranty for a product purchased.

To Which Standards Should a Company Adhere?

Although the basic principles for DPP are the same in the various countries that have adopted them, the precise method for implementing them is not consistent. In France, for example, every database created must be individually approved, while in the United Kingdom compliance is assumed but the database must be registered. In Germany, the company must have a chief privacy officer who is personally responsible for compliance with the principles. This becomes a problem when companies are international in scope and need to adhere to the requirements of all the countries. At some point, total adherence becomes impractical. Some privacy authorities say that a company must try its best, be reasonable, and hope that it is within the intent and spirit of the principles so as to avoid prosecution. Some companies engaged in medical data gathering have carefully considered the privacy laws of the EU before settling on the country they believed to have the easiest framework within which to operate.

Which Databases Need to Be Approved and Reviewed?

It is not always clear which data are personally identifiable and need to be registered and protected. For example, does the data collected with active badges, GPS localization devices, or wireless telephone location tracking fall into this category? If databases are combined, does the resulting linked database become a new database, thereby requiring a new approval?

Generally, all information that is personally identifiable is included: text, images, voice, unique identifiers, and e-mail addresses.

Is Consent Possible?

It is possible that some people whose data is compiled are too old, too young, too sick, or already dead and therefore unable to give permission to have their data shared or used for secondary purposes. Can surrogates give this consent? Where such data collection is governmentally approved, the question has been raised as to whether legislation supersedes a right of privacy.

Keeping Track of the Laws

The state of data privacy protection continues to unfold, both in the countries that have already adopted this legislation and are modifying the precise techniques in use, and in additional countries that are adopting the principles.

Continuing Debate

The rancor between the United States and the EC will probably continue for some time despite the Safe Harbor Principles. Both the U.S. Congress and the EU Council of Ministers have declared victory on this subject. However, in March 2001, the Bush Administration raised the issue of model contract clauses, worried that they would become mandatory.14 In addition, the chairman of the Belgian Privacy Commission, Paul Thomas, recently used the term e-paternalism in a discussion of electronic ways to simplify government administration and to denote an attitude of governmental supervision of companies to protect the individual.15 This is an unfortunate term to use in the presence of American decision makers who do not feel that the proper role of the government is to provide for citizens' information technology needs without giving them responsibility. Thus far, fewer than 150 companies have signed the Safe Harbor Agreement, although this number includes some very large technology companies such as Intel, Microsoft, and Hewlett-Packard.

United States Enforcement

Some U.S. firms and government officials may be under the misapprehension that they can simply self-certify the Safe Harbor Principles with a wink and a nod and then go on operating as they did before. EU officials have stated that lip service to personal data protection is not enough. They expect the FTC to conduct due diligence on these Safe Harbor applications and to enforce the principles vigorously. Registration may be only the first step. In April 2001, the U.S. Department of Commerce announced plans to hire a privacy advisor to monitor compliance and "ensur[e] that we're not dropping cookies or Web bugs or doing things that people might consider a violation of their privacy."16

The Department's plans won immediate praise from House Majority Leader Richard Armey (R–Texas).

Cost of Extending Safe Harbor Principles to Web Sites

According to Robert Hahn, director of the American Enterprise Institute–Brookings Joint Center for Regulatory Studies, the total cost of pending American privacy legislation could be as high as $36 billion.17 After consulting with 17 information technology consulting companies, the study concluded that $100,000 in labor and other costs would be necessary to provide customized software to each marketing-active Web site (about 10 percent of active Web sites, or 361,000 sites). This includes the ability for users to access their own personally identifiable information online and correct it as necessary. In addition, the consensus was that off-the-shelf software products would be too difficult to integrate for most Web sites. However, the research has been criticized by American data privacy expert Peter Swire, who could not come up with a useful estimate himself.

Length of Maintenance

It is not always clear how long companies can retain data.

RECOMMENDATIONS FOR UNITED STATES COMPANIES

Secure Top-Level Leadership Recognizing Privacy Protection as an Important Goal

Adherence to the DPP principles is not a simple task; it requires the time and expertise of many people. To the extent that systems must be modified and perhaps completely reconstructed, it is also expensive. Doing so requires that the president and the board understand why there is a need to comply and what adherence demands of the company. Top-level decision makers should also be aware of the privacy concerns of their users, clients, and customers and of the need (and value) to address these concerns.

Appoint a Chief Privacy Officer (CPO)

A person with privacy protection responsibilities must be appointed in some countries, but even without this requirement, a company should have someone whose job it is to track privacy law developments, submit approval requests, and communicate with the DPP authorities in the appropriate countries. This officer oversees corporate actions aimed at avoiding violations of the laws to which the company is subject. The CPO and the compliance committee have especially important roles to play when the company is engaged in a merger with another company that may not have been subject to, but is now subject to, DPP laws.

Establish a DPP Compliance Team

A group of higher-level executives from functional areas across the company should be formed to work with the CPO in order to ensure the infusion of DPP principles across the company. With the CPO, the compliance team would oversee the development of corporate policies, initiate training on privacy principles, evaluate privacy issues arising in the business, and ensure that outsourcing contractors are also in compliance. The team should also seek to determine what privacy issues are of concern to customers and employees and whether the company's need for data can be synchronized with individuals' desires and rights for privacy. Among the policy issues that need to be addressed are those dealing with the identification of disclosure controls, staff training, and the limits on usage of the data by internal staff. The compliance team will also make certain that changes in the DPP rules and regulations are tracked and that changes to local systems are made as needed.

Other issues can arise that require response. If users, clients, or customers request access to their data, the firm must be able to respond quickly, accurately, and without special effort. If a data subject seeks to claim compensation for the misuse of data, it is equally important that the company be able to respond quickly. Even if a firm exists almost entirely in the United States, it may need to meet DPP standards if it has a Web site that collects data from foreign nationals and, hence, is transferring data from other countries into the United States. Using foreign languages on a site invites the more focused interpretation that the company is targeting a particular country or set of countries.

Part of the education of the compliance team may well be attendance at the annual meetings of the data privacy protection commissioners, who convene once a year to review changes and emerging needs. This public forum enables privacy staff to communicate with their peers at other organizations to see how adherence is being structured. Additional information and guidelines can be obtained from such privacy organizations as the Better Business Bureau Online (BBBOnLine), TRUSTe, Computer Professionals for Social Responsibility (CPSR), and the like.

Obtain Legal Assistance

The rules and regulations related to the DPP principles vary in their specifics from country to country. They are complex, and your company may well require legal assistance. These rules and regulations change periodically and can be vaguely written, so having a legal expert as a resource is generally required.

A small company, in particular, would have an especially difficult time with this task, as would a company involved in particularly sensitive data handling, such as one engaged in medical research or in the treatment of patients.

Inventory and Audit Databases with Personally Identifiable Data

Many companies are not aware of how much data is collected, how and by whom it is being used, and whether this data collection is consistent with corporate policy and corporate mission. Neither is it always known whether persons without authorization have access to the data held in these databases. Audits of security measures should be undertaken as well to ensure that only authorized persons operate in accordance with the DPP principles and under nondisclosure policies. Companies will have to examine data centers in multiple settings to see that their data contents, data processing, and transfer procedures are not in violation of the DPP principles. Remember that the databases in question include personnel databases as well as customer and client databases. (A brief sketch of one way to begin such an inventory appears at the end of this section.)

Ensure adequate security. The DPP principles hold liabilities not just for willful disclosure but also for unauthorized data releases. Systems should be designed for secure internal staff access and for the prevention of incursions by hackers and people who lack authorization. As new systems are developed, the CPO of the firm should ensure that all users of corporate data have input into the ethical and legal characteristics of the systems. Think through design requirements and specifications to ensure that the highest levels of security and protection exist.

Another issue to be addressed is that of opt-out and opt-in choices. Giving users, clients, and customers the option of having data held on them is a strong possibility if one's intent is to be truly open with them. If this is the case, make it clear that this is an option and make it equally clear how one can opt out entirely or simply withhold some unneeded data. Naturally, there exists the possibility that some of your practices are in violation of the DPP principles. If so, implement changes immediately to bring your data collection, manipulation, and transfer policies into alignment with the principles.
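The sketch below is one hedged illustration of how such an inventory pass might begin: it walks a database catalog and flags column names that look like personally identifiable data. It is not part of the chapter's source material; the keyword list, the SQLite sample schema, and the table names are invented, and a real audit would also cover paper records, spreadsheets, and extracts held outside the data center.

```python
# Minimal sketch: flag likely personally identifiable columns in a database.
# Uses an in-memory SQLite database so the example is self-contained; a real
# inventory would point at production catalogs and review the hits manually.
import sqlite3

PII_KEYWORDS = ("name", "email", "phone", "address", "birth", "ssn", "salary")

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, full_name TEXT, home_phone TEXT,
                            birth_date TEXT, department TEXT);
    CREATE TABLE shipments (id INTEGER, sku TEXT, quantity INTEGER);
""")

flagged = []
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
for (table,) in tables:
    # PRAGMA table_info returns one row per column; the second field is the name.
    for _, column, *_ in conn.execute(f"PRAGMA table_info({table})"):
        if any(key in column.lower() for key in PII_KEYWORDS):
            flagged.append((table, column))

for table, column in flagged:
    print(f"Review {table}.{column}: possible personally identifiable data")
```

The point of the exercise is not the code but the conversation it starts: each flagged column should be traced to the purpose for which it was collected, the people who can see it, and the DPP principle that governs it.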

Recognize that Privacy Regulations Are Not an Excuse Not to Collect Data

It is true, of course, that the data privacy protection requirements impose work on the companies that collect such data. Clearly, one cannot simply assert that a company may not collect personal data; the requirements do not eliminate the possibility of doing so. They do force companies to make a conscious decision as to whether they have a need to collect the data and then to identify means of informing the individuals involved of their rights.

Focus on Consent

In many respects, the end result of the DPP principles turns out to be little different in the EU than it is in the United States. Much data is still collected and used, for instance, for marketing purposes. The greater visible difference is the emphasis that DPP-adherent countries place on informed consent. Their companies cannot be so secretive in their operations that the alert consumer would not know what permissions they are transferring to the companies with the divulgence of personal data.

Be Aware that a Web Privacy Policy Does Not Adequately Address the DPP Principles

Such a policy may go part of the way to establishing whether your company qualifies under the principles or under Safe Harbor provisions; but in and of itself, it is not necessarily sufficient either to fully inform users of their options or to satisfy the requirements for DPP compliance. Many media must be used to ensure that all affected parties are aware of, and, hopefully, in support of, the privacy policies in place.

Consider Applying under the Safe Harbor Provisions

A company engaged in processing data in a DPP country might choose to sign the Safe Harbor agreement. Other alternatives exist. One alternative is contracting with a company in that country to collect and process the data; this is facilitated by the development of model contractual agreements to guide such collection and processing. Another strategic move is actually moving a data center into the EU, because it is easier to move data out of the EU than it is to move it in from the United States. Yet another alternative is running one set of data systems in DPP countries and another in the United States.

Consider Implementing P3P on Your Web Sites

The Platform for Privacy Preferences (P3P) is an emerging XML standard that automates the exchange of privacy-related data between a site and its visitors. Users of P3P-capable browsers are able to indicate which data they choose to share as they visit Web sites. Sites requesting this data receive it automatically as the user moves onto the site.
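As one hedged illustration of what P3P deployment can look like in practice, the sketch below serves a page whose response carries a P3P header with a compact policy and a pointer to a full policy file. The compact-policy tokens and the /w3c/p3p.xml path are placeholders chosen for the example, not a vetted policy; a real deployment would derive them from the organization's actual privacy practices with legal review.

```python
# A minimal sketch of advertising a P3P compact policy from a Python web server.
# The policy tokens below are placeholders, not a statement of practice.
from http.server import BaseHTTPRequestHandler, HTTPServer

class P3PHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Compact policy sent with every response so browsers can evaluate it
        # before deciding how to treat this site's cookies.
        self.send_header("P3P", 'policyref="/w3c/p3p.xml", CP="CAO PSA OUR"')
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"<html><body>Policy-labeled response</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), P3PHandler).serve_forever()
```

Browsers that evaluate P3P can then compare the declared policy with the user's stated preferences before deciding, for example, how to treat the site's cookies.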

Keep Abreast of Emerging Issues

New technologies introduce new opportunities for both business improvements and public-relations disasters. As new hardware and software technologies are considered for implementation, the privacy team must determine whether new policies are needed and whether it should take actions to avoid the collection and misuse of data. This is especially true when the technologies are or could be used for employee and activity surveillance. In this respect, technologies such as XML, processor and IP address checking, new browsers, and wearable processing chips come to mind.

SUMMARY

The DPPs and the laws guiding their implementation are not a transitory phenomenon. As soon as a company is engaged in international data practices, it becomes subject to the laws implementing the principles. It is incumbent upon IT managers to make sure that corporate executives understand the differences between United States and European (and other national) perspectives and laws. As the CPO of Experian Corporation observed, "Compliance is not a cost to the business, but a legitimate cost of doing business."

References

1. Gentot, Michel (April 2001). J. 23rd Intl. Conf. Data Prot. Commissioners. France: Commission Nationale de l'Informatique et des Libertés. No. 2, p. 1.
2. Simitis, S. (1995). From the market to the polis: the EU data protection directive on the protection of personal data. Iowa Law Rev. 80(3), pp. 445–469.
3. Smith, Kent A. and Healy, Patricia E. Abstract from transborder data flows: the transfer of medical and other scientific information by the United States. Inf. Soc. 5(2). Accessed on 7/12/2001. http://www.slis.indiana.edu.
4. Klosek, Jacqueline (2000). Data Privacy in the Information Age. Westport, CT: Quorum Books. Chapter 4, pp. 51–127.
5. Ward, Mark (July 11, 2001). The Problems of Protecting Privacy. BBC News Online. Accessed on 8/29/01. http://news.bbc.co.uk.
6. Trading in Information. BBC News Online. Accessed on 8/29/01. http://news.bbc.co.uk.
7. Medical Research in Peril. BBC News Online. Accessed on 8/29/01. http://news.bbc.co.uk.
8. Remarks of Secretary of Commerce William M. Daley, February 5, 1999. http://204.193.246.62/public.nsf. Cited from Klosek, Jacqueline (2000). Data Privacy in the Information Age. Westport, CT: Quorum Books. p. 2.
9. Deans, C.P., Karwan, K.R., Goslar, M.D., Ricks, D.A., and Toyne, B. (1991). Key international IS issues in U.S.-based multinational corporations. J. Manage. Inf. Syst., 7(4), 27–50. Accessed on August 28, 2001. http://www.terry.uga.edu.
10. Business Wire (October 24, 1997).
11. Vastine, Robert (July 30, 1997). Battling over data privacy. J. Commerce.
12. Swire, Peter P. and Litan, Robert E. (October 21, 1997). None of Your Business: World Data Flows, Electronic Commerce, and the European Privacy Directive. Interim report issued for a conference of the Brookings Institution. Accessed on August 28, 2001.

14. U.S. raises issues with EU over data privacy laws. (March 28, 2001). Reuters. Accessed on 4/16/01. http://news.excite.com.
15. Thomas, Paul (July 2001). J. 23rd Intl. Conf. Data Prot. Commissioners. France: Commission Nationale de l'Informatique et des Libertés. No. 3, p. 1.
16. Thibodeau, Patrick (May 1, 2001). Senators call for addition of federal CIO. Computerworld. Quote is from a Commerce Department spokesman. Accessed on 8/28/01. www.computerworld.com.
17. Johnston, Margret (May 14, 2001). Study: Privacy Proposals Could Cost Billions. IDG. Accessed from CNN.com on 8/28/01. www.cnn.com.



Chapter 57

A Strategic Response to the Broad Spectrum of Internet Abuse
Janice C. Sipior
Burke T. Ward

Internet access continues to expand, with September 2001 estimates of 407.1 million Internet users worldwide1 and an estimated 165.2 million users in the United States alone.2 The explosive growth of Internet use for information access, file transfer, e-mail, collaborative work, banking, shopping, and performing countless other functions has brought about an escalation of concerns. The advantages of quick access to timely data and less restricted communications resulting from Internet connectivity have been accompanied by reduced consumer confidence in Internet-related business activities, the risks of financial loss, and legal liability from sources both external and internal to the organization.

Organizations have responded by creating the position of Chief Privacy Officer (CPO). The primary responsibility of the CPO is to protect online consumer privacy by developing the organization's privacy policy and ensuring compliance with privacy laws and regulations. There are currently about 350 CPOs in the United States according to Alan Westin, who founded the International Association of Corporate Privacy Officers.3 Westin projects that the number of CPOs will rise to between 500 and 1000 this year as organizations respond to new privacy-related laws and regulations.4 For example, three significant new laws and regulations imposing privacy standards in the United States went into effect November 1, 2000. Included among these are the Gramm–Leach–Bliley Financial Modernization Act, directed at consumer privacy in the financial services industry; the Health Insurance Portability and Accountability Act, which ensures patients' medical records are private; and the United States Department of Commerce's Safe Harbor list, intended to assist United States corporate compliance with the European Union Data Protection Directive.



Exhibit 1. Sources of Threats to Internet Content and Use in Business

While appropriate for issues of consumer privacy, does the new CPO position provide adequate consideration of the increasing risks facing online organizations? The first line of defense in responding to increasing risks resulting from inappropriate Internet activities is to raise awareness and understanding of what the risks are and how they might arise. This chapter first discusses the various types of misconduct and the risks organizations face. The scope of considerations is far-reaching; certainly, this discussion is not comprehensive. Any effort to address all possible threats is never-ending, as online activities are limited only by the imaginations of an increasingly Internet-savvy workforce and worldwide population. The chapter concludes by proposing the strategic response of an expanded role for the CPO. In addition to overseeing an organization's Internet privacy issues, the role of CPO should encompass Internet integrity. Thus, this individual's realm of responsibility would encompass not only privacy-related laws and regulations but also online content liability and the interpretation of laws and regulations as applied to the Internet.

Exhibit 2. Sources and Example Threats to Internet Content and Use in Business

External Sources
• Competitors: Copyright infringement; Intellectual property loss; Trade secret loss
• Consumers: Invasion of privacy
• Criminals: Defaced Web sites; DDoS attacks; Hacking and cracking; Malicious code; Theft

Both External and Internal Sources
• Business Partners: Regulations violations; Cookie sharing
• Technical failings: Device defects; Loss of network; Misdelivery; Password loopholes
• Government: Discovery (subpoena); Law enforcement (search warrant)

Internal Sources
• Employees and Consultants: Employer liability (Copyright infringement; Defamation and libel; Discrimination; Harassment; Hostile work environment; Obscenity; Pornography; Invasion of privacy; Securities violations; Trademark and trade secret violations); Loss of employee productivity
• Intruders: Cookies; Spam

ORGANIZATIONAL RISKS OF INTERNET USE IN BUSINESS

To promote an understanding of the types of misconduct and the risks to which organizations are subject, this section presents example threats to Internet use in business. As shown in Exhibit 1, the sources of these threats are categorized as external, both external and internal, and internal to an organization. These categories are not exclusive; rather, there is overlap among them. The purpose of the categorization is to reveal the motivation and implications associated with each source. A summary of the sources and corresponding threats is presented in Exhibit 2.

External Threats to Online Organizations

Online organizations are subject to a diversity of threats from the external environment. Each of these sources may have differing motives but may nonetheless cause financial loss or reduced confidence in the Internet-related business activities of the targeted organization.

Competitors. Competitors and industry spies may seek to gain access to copyrights, intellectual property, trade secrets, and other proprietary information.

Intelligence gathering has occurred for decades, including traditional spying methods such as browsing at a competitor's store, posing as a customer, searching obscure public records, or counting deliveries at the loading dock. In what Microsoft Corp. called a deplorable act of corporate espionage, hackers gained access to its computer network using an e-mail account in Russia to steal passwords to the network. At a minimum, the hackers were able to view source code for recent versions of the Windows operating system and portions of the Office suite. Organizations must be on the alert to recognize the corresponding E-methods for spying.

Consumers. The major concern of consumers is privacy. Online businesses have increasingly sought to tailor Web site interaction and target promotional activities to individual consumers. This requires the collection of various types of data, preferably identified with a specific individual.

The use of cookies enables personal information to be easily obtained from Web users, often without their knowledge. In addition to click-stream data, cookies can collect the user's IP address, the number and dates of prior visits, and the type and version of browser and operating system, among other types of information. Further, users may be asked to provide registration information when visiting a Web site which, when combined with data collected through tracking technology, can be used to create an individual user profile. Subsequently, this information may be used to send unsolicited bulk e-mail, or spam. While primarily utilized for commercial purposes, spam may also promote political, malicious, or illegal schemes. Consumer reaction has ranged from outrage to filing lawsuits against major Internet companies for failure to disclose data collection practices or to comply with their own published privacy policies. Among the companies named in claims are Amazon.com, RealNetworks, and Yahoo!. In addition to the special regulations that recently went into effect, the Federal Trade Commission recently asserted its authority by bringing enforcement actions against Web sites for questionable data collection practices.

Criminals: Hackers and Crackers. Legal commentators are divided over how Internet crime, or cybercrime, should be addressed. Cybercrime can be viewed either as traditional crime committed with computer resources or as a new category with unique considerations requiring a new legal framework. Emerging technologies are accompanied by emerging challenges such as perpetrator identification, intent and motivation, jurisdiction, and international cooperation. Although attackers operating from outside an organization attract much of the attention, disgruntled insiders are the primary perpetrators of Internet crime. The perpetrators, generally referred to as hackers or crackers, have cost U.S. businesses an estimated $10 billion annually, according to the FBI.

These losses stem from activities ranging from simple criminal trespass to sophisticated Web site defacement, distributed denial-of-service (DDoS) attacks, hacking and cracking, malicious code, and theft of proprietary information, resources, and services.

Web site defacement entails unauthorized access to either a user's account or the Webmaster's password to download, alter, and upload a Web page. Perhaps the most brazen Web site defacement was directed against the Federal Bureau of Investigation, the agency that investigates this criminal violation of federal law.

DDoS attacks direct numerous computers to send service requests to a targeted Web site. A rash of DDoS attacks against prominent Web sites, including Amazon.com, eBay, and Yahoo!, occurred in February 2000. The servers at these targeted sites were so overwhelmed that the sites were unable to respond to legitimate requests, causing more than $1.2 billion in total losses. The estimated losses are based on each company's lost revenues for site downtime, lost market capitalization due to plunging stock prices, and the cost of systems security upgrades. Future losses may result from a reduction of consumer confidence in E-commerce. Additionally, organizations and Internet service providers (ISPs) could be held liable for unwittingly allowing their computers to partake in the attacks.

Hacking and cracking entails trespass for the challenge or thrill of gaining illegal entry, for illicit financial gain, or for malicious activity. According to an FBI survey, 55 percent of respondents reported malicious activity by disgruntled insiders. A former employee of Forbes, Inc. used a coworker's account to vengefully crash five network servers and erase the server volume on each. A two-day shutdown of Forbes' New York operations resulted in losses exceeding $100,000.

Malicious code is devised to cause damage or to steal information. The most common forms are viruses, worms, and Trojan programs, designed to spread from one computer to others via executable code in e-mail or infected disks. There are an estimated 30,000 viruses in existence, with approximately 300 new viruses created each month. The resulting damage can be quite costly: the Melissa Macro virus caused an estimated $80 million in damage, and the "ILOVEYOU" virus an estimated $10 billion worldwide.

Government. The government may gain access to Internet content through law enforcement activities or discovery processes. Law enforcement agencies investigating illegal activities may present a search warrant to search files or e-mail messages in transit, stored on disk or in paper form, backed up to tape, or even those that have been deleted and overwritten.


Similarly, through discovery, a subpoena may be issued. For example, e-mail messages written by Bill Gates were retrieved and used as evidence to support the Department of Justice's antitrust lawsuit alleging that Microsoft used its Windows monopoly to unfairly crush Netscape Navigator. Antitrust experts commented that the messages constituted some of the most damaging evidence against Microsoft.

Intruders. Organizations are subject to intrusions that are not illegal but may certainly be disruptive. Unlike criminals who gain illegal, unauthorized entry, marketers and advertisers are able to intrude upon the privacy of employees through cookies and spam. Organizations may question the methods and types of information collection and the use of that information. Further, spam can overload a company's server and cause it to slow or crash.

Both External and Internal Threats to Online Organizations

Threats to Internet content may straddle the external and internal environment through sources such as business partners and technical failure. Business partnering may include formal and informal alliances established for a joint project. Technological glitches can expose organizational content to perusal by unintended sources.

Business Partners. Organizations are less likely to view business partners as a threat, in contrast to other sources such as competitors. However, the interests of a partner organization lie primarily with furthering its own goals. An organization may be liable not only for the actions of its own employees, but also for those of partner employees. The risks from a source that is both external and internal can become more complicated.

Global partnering, for example, requires compliance with a multitude of trade laws worldwide. Web sites must comply with the European Union's privacy laws, which restrict the collection of personal data. Other areas of global restrictions include consumer protection laws, advertising restrictions, and the international equivalents of the U.S. Food and Drug Administration.

A risk unique to Web site partnering is cookie sharing, the practice of collecting and consolidating information gathered from various Web sites. Another form of cookie sharing is achieved by multiple cookie creators partnering to share one Web site. Cookies can be sent to a user from a domain other than the site the user visited. For example, an advertising agency could post its clients' banner ads from a central server and include cookies to track the activities of users receiving the ad. The user is unlikely to know which site created the cookie and is therefore less likely to know the intended use of the user profile obtained.
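
To make the mechanics concrete, the following minimal sketch illustrates how a third-party ad server of the kind described above could set its own tracking cookie while serving a banner ad and accumulate a cross-site profile. The domain names, cookie name, and data structures are hypothetical, offered only as an illustration of the technique, not as any particular vendor's implementation.

    import secrets
    from collections import defaultdict

    # Hypothetical in-memory store: tracking ID -> list of (site, page) visits.
    profiles = defaultdict(list)

    def serve_banner(referring_site, page, existing_cookie=None):
        """Build HTTP response headers for a banner ad served from a central
        ad domain, setting a tracking cookie shared across every client site."""
        tracking_id = existing_cookie or secrets.token_hex(8)
        profiles[tracking_id].append((referring_site, page))
        headers = [
            "HTTP/1.1 200 OK",
            "Content-Type: image/gif",
            # The cookie belongs to the ad domain, not the site being visited.
            f"Set-Cookie: track_id={tracking_id}; Domain=.example-agency.com; Path=/",
        ]
        return "\r\n".join(headers), tracking_id

    # One user browsing two unrelated client sites ends up in a single profile.
    _, tid = serve_banner("www.retailer-a.example", "/shoes")
    serve_banner("www.news-b.example", "/politics", existing_cookie=tid)
    print(profiles[tid])  # [('www.retailer-a.example', '/shoes'), ('www.news-b.example', '/politics')]

Because the Set-Cookie header names the ad agency's domain, the browser returns the same identifier from every site that carries the banner, which is what allows profiles to be consolidated across sites.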

Technical Failure. Internet content may be at risk due to technical failure, including device defects, loss of network integrity and availability, misdelivery of e-mail messages or files, and password protection loopholes. In managing resources, an internal system administrator can, for example, monitor employees' files or e-mail, as was the case at Epson America, Inc. This interception was by an internal employee, but the content examined could be outside the employee's realm of responsibility. Similarly, an ISP can intercept communications without liability as long as it is necessary to provide services or to protect property, again relinquishing content, but this time to someone external to the organization. Message delivery misdirection resulting from a technical glitch may also reveal content to unintended recipients.

Internal Threats to Online Organizations

Those within an organization, including consultants and employees, may inadvertently or intentionally cause their employer shared liability for their actions. The most common concerns for liability on the Internet are copyright infringement, pornography, and defamation.5 We consider a wider array, including copyright infringement; defamation and libel; discrimination; harassment; hostile work environment; obscenity; pornography; invasion of privacy; violations of securities laws; and violations of trademark and trade secret laws. Although these areas illustrate potential liability on the part of employers for employee Internet activities in the workplace, the legal precedents regulating this arena are still evolving.

Copyright Infringement. The copyright laws of the United States, intended to balance the rights of users and creators, include protection of the various materials found on the Internet.6 The ease with which electronic material can be copied has resulted in a rapid increase in copyright infringement on the Internet. Copyright infringement committed by an employee may result in employer liability, even if the employer did not perform the copying or distributing.

Employer liability can result from what may seem to be innocent activities. An employee may have brought in an individually licensed copy of software from home, copied software from the Web, or cut and pasted clipart from other Web sites. For example, the Webmaster of the National Association of Fire Equipment Distributors used copyrighted clipart, obtained from three CD-ROM volumes, to decorate the trade organization's Web site. In the ensuing lawsuit, Marobie–Fl. Inc. d/b/a Galactic Software v. National Association of Fire Equipment Distributors et al.,7 the trade organization was held liable for copyright infringement. The bottom line is that if an employer has possession of improperly obtained materials for which valid purchase receipts cannot be provided, it can be charged with copyright infringement.

Defamation and Libel. The various forms of Internet communication have given rise to employee cyber-smearing or cyber-venting, the virtual equivalent of casual conversations around the water cooler. Unwitting or disgruntled employees have utilized bulletin boards, chat rooms, e-mail, and Web sites to anonymously vent opinions, concerns, frustration, or anger about the workplace. For most large companies, at least one such site exists; for others, such as Microsoft, there are several.

The common law tort of defamation is intended to protect an individual's interest in his or her own reputation. Defamation can be difficult to prove for a public company. Postings must contain false statements, not merely opinion; must be made knowingly or recklessly; and must harm the company. In what has been described as a case of cyber-venting gone too far, a former engineer of Intel Corp. created a Web site as a critical forum for a group called Former and Current Employees of Intel. Intel did not file a defamation claim over the site. Rather, a restraining order was placed against the former employee in response to unsolicited mass e-mails sent to as many as 30,000 Intel employees. Intel chose to stop the offender's activities quickly rather than pursue a lengthy case that would draw more attention to the very content it sought to suppress.

Frequently, the intent of a lawsuit is not to prove defamation, but rather to reveal the names of anonymous detractors. For example, the Raytheon Co. suspected that postings to a Yahoo! finance message board were made by an employee. In the ensuing lawsuit, Raytheon obtained subpoenas against Yahoo! and other Internet services to learn the identities behind all 21 aliases, most of whom were employees, and then dismissed the suit. Four employees subsequently resigned; others entered corporate counseling.

Discrimination, Harassment, Hostile Work Environment, Obscenity, and Pornography. According to the Communications Decency Act of 1996, employers are not liable for obscene or harassing use of electronic telecommunications by their employees unless the conduct is within the scope of employment and the employer (1) had knowledge of, and authorized, the conduct or (2) recklessly disregarded the conduct.8 Under harassment law, an employee may sue for damages based on a "hostile environment," a vague term covering an array of offensive elements, such as jokes, chat, pinups, images, and even co-workers gathered around a screen making sexist or racially insensitive remarks.

Unfortunately, instances of such Internet abuse abound in the workplace. For example, two African-American employees of Morgan Stanley Dean Witter filed a $36 million class action suit, later settled, claiming they suffered emotional and physical stress from e-mail messages containing racist jokes. Four female employees received a settlement of $2.2 million from Chevron Corporation for sexually harassing e-mail.

Compaq Computer Corp. fired 20 employees after they downloaded sexually explicit images from Web sites, logging over 1000 hits apiece, and distributed the images via e-mail. The New York Times Co. fired 23 employees for distributing pornographic images through e-mail.

Invasion of Employee Privacy. Employers bear the responsibility for managing organizational resources appropriately. In response, increasing numbers of organizations are monitoring Internet activities. Among the reasons for monitoring are reduced employee productivity, decreased bandwidth, corporate espionage, and legal liability. Nearly three quarters of major U.S. companies responding to a survey review some form of their employees' communications, including Internet connections, e-mail, computer files, or telephone calls.9 Of all surveillance methods, Internet and e-mail monitoring have seen the most explosive growth, with 54.1 percent of companies now monitoring employees' Internet connections and 38.1 percent reviewing e-mail messages. However, these actions may conflict with legitimate employee privacy expectations in the workplace.

Violations of Securities Laws. Provisions of the Securities Exchange Act of 1934 prohibit the manipulation of stock prices through false or misleading communications. With respect to the Internet, the Securities and Exchange Commission (SEC) has taken action in cases of phony Internet message board postings. One incident was perpetrated anonymously by an employee of PairGain, a publicly traded company, who targeted his own employer in a hyperlink posted to a Yahoo! finance message board. The link, which stated, "BUYOUT NEWS!!! ECILF is buying PAIR ... Just found it on Bloomberg ...," led to an authentic-looking spoof of Bloomberg L.P.'s news site. The stock price of PairGain rose nearly 31 percent before the markets settled. PairGain cooperated fully during the investigation and was never implicated. Nonetheless, the company was subjected to the disruption of the investigation and was undoubtedly concerned about potential liabilities.

Additional concerns include inaccurate disclosures made unwittingly by employees of public companies participating in Internet-based discussions. Such statements, albeit inadvertent, violate the general antifraud provisions of the Securities Exchange Act.10

Violations of Trademark and Trade Secret Laws. Similar to copyright infringement, employers may also be held liable for their employees' violations of trademark and trade secret laws. If an employee were to post the trademark of another organization on his employer's Web site and, when informed, the employer did not take action to correct the infringement, the employer could be held liable. The same rationale applies to an employee's misuse of trade secrets. If an employee used the employer's resources to obtain another organization's proprietary information, such as a customer list or software code, the employer could be liable.

Exhibit 3. Potential Losses Resulting from Decreased Employee Productivity

Factors
  Number of hours per day each employee spends on personal business: 1
  Number of workdays per year: 240
  Average hourly rate including overhead expenses: $40

Result
  Annual cost of lost productivity per employee: $9,600

Source: Dean and Carey, 2000, http://www.idc.com.

Loss of Employee Productivity and Internet Resource Use. For some companies, the concern is not what their employees are doing on the Internet, but rather the time they spend doing it. For example, at Xerox-PARC, 40 employees were fired for spending as much as eight hours a day visiting inappropriate Web sites. Ernst & Young reported that some firms calculated that more than 80 percent of their Internet capacity was used to access non-business-related Web sites.

To gain insight into workplace surfing, a survey revealed that only 9.6 percent of respondents never surf non-work-related sites, while 12.6 percent admitted surfing over two hours.11 Employee Web surfing can represent lost productivity, especially when coupled with non-work-related e-mail. About half (51.5 percent) of respondents to the same survey reported receiving one to five non-work-related e-mails, on average, during the workday. Over half (56.3 percent) send one to five such e-mails. Together, these activities could represent an estimated cost of $9,600 per employee per year, as shown in Exhibit 3.12
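
The Exhibit 3 estimate is simply the product of the three factors shown there. The short sketch below reproduces that arithmetic so the assumptions can be varied; the one-hour-per-day figure, the 240 workdays, and the $40 loaded hourly rate are the exhibit's own inputs, while the function name is ours.

    def annual_lost_productivity(hours_per_day=1, workdays_per_year=240, hourly_rate=40):
        """Annual cost of lost productivity per employee (Exhibit 3 inputs)."""
        return hours_per_day * workdays_per_year * hourly_rate

    print(annual_lost_productivity())                  # 9600, matching Exhibit 3
    print(annual_lost_productivity(hours_per_day=2))   # 19200 if personal use doubles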

STRATEGIC RESPONSE: THE EXPANDED ROLE OF CHIEF PRIVACY AND INTEGRITY OFFICER

It is evident that as more people gain access to the Internet, the potential cyber-liabilities confronting online organizations will continue to increase. The burden of repercussions from misuse is placed squarely on organizations, which hold the ultimate responsibility for use of organizational resources by their employees. Interestingly, liability may even extend to inadequate site security resulting in unwitting participation in Internet abuses such as DDoS attacks, as previously discussed.

In response, organizations may place the responsibility for overseeing Internet activity on the existing legal department or on one or more chosen individuals, depending on the responsibilities to be assumed. A survey of 66 Fortune 500 companies with CPOs, conducted by PricewaterhouseCoopers, found the CPO to be part of the legal department in 50 percent of the surveyed companies.13 Only 8 percent reported the formation of a separate privacy department headed by the CPO.

To effectively coordinate proactive and reactive organizationwide action against these potential consequences, we recommend expanding the role of the existing CPO to include Internet integrity. Responsibilities would be expanded from focusing on consumer privacy to ensuring the integrity of (1) Internet content, including Web sites, files, e-mail, and communications; and (2) Internet use, including the direct activities of internal sources and the consequences of external sources. This emergent role would thus more appropriately be titled Chief Privacy and Integrity Officer (CPIO), charged with protecting organizational resources while maximizing the use of the Internet. Based on the reported areas of responsibility of existing CPOs, none currently includes this proposed expansion from a focus on privacy issues to the broadened perspective of Internet integrity.

Certainly, this expanded set of responsibilities should be accompanied by an expanded set of qualifications. A legal specialization in consumer privacy would be necessary to ensure organizational adherence to government regulations, industry self-regulation, and organizational initiatives associated with consumer privacy. Proactively attending to the broad spectrum of Internet activities and potential unintended consequences requires both an expanded legal background, including online content liability and the interpretation of laws and regulations as applied to the Internet, and an extensive technical understanding to minimize the occurrence of threatening Internet activities.

Specifically, the newly defined CPIO would be responsible for formulating or reformulating an expressed Internet use policy, undertaking ongoing training and other means to maintain awareness of issues, monitoring internal sources, implementing defenses against external sources, and securing adequate liability insurance. The effectiveness of this new role in overseeing these responsibilities would be determined by assessing current operations, implementing proactive measures to reduce potential misuse, and continuously keeping abreast of technological advances, legislative and regulatory initiatives, and new areas of vulnerability.

Internet Use Policy

The Internet use policy should clearly state what is acceptable and unacceptable use of specific Internet technologies, by whom, at what times, for what duration, and for what purposes. Details about personal use, prohibited use, and access to prohibited materials should be comprehensively stated. Privacy rights, if any, for allowable personal use should be defined to clarify employee expectations. The various types of use prohibited under all circumstances should be enumerated, such as criminal, disruptive, offensive, harassing, or otherwise unethical use. Access to prohibited materials, such as copyrighted, licensed, or other intellectual property, sensitive company materials, or otherwise illegal content, should be explicitly condemned. Procedures for identifying violations must be communicated, and the consequences of policy violations should be stated.
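
Policy statements of this kind are easier to enforce consistently when they are also captured in machine-readable form. The following is a minimal, hypothetical sketch of how the elements just listed (who, what, when, and which categories are always prohibited) might be encoded and checked; the category names and field layout are illustrative assumptions, not a prescribed standard.

    from dataclasses import dataclass, field
    from datetime import time

    ALWAYS_PROHIBITED = {"criminal", "harassing", "offensive", "disruptive", "unethical"}

    @dataclass
    class InternetUsePolicy:
        # Example rule: personal use allowed only outside core business hours.
        personal_use_window: tuple = (time(18, 0), time(8, 0))
        prohibited_categories: set = field(default_factory=lambda: set(ALWAYS_PROHIBITED))

        def is_permitted(self, category: str, is_personal: bool, at: time) -> bool:
            """Return True if a request of this category is allowed under the policy."""
            if category in self.prohibited_categories:
                return False
            if is_personal:
                start, end = self.personal_use_window
                # The window wraps around midnight (e.g., 18:00 to 08:00).
                return at >= start or at <= end
            return True

    policy = InternetUsePolicy()
    print(policy.is_permitted("business", is_personal=False, at=time(10, 0)))   # True
    print(policy.is_permitted("harassing", is_personal=False, at=time(10, 0)))  # False
    print(policy.is_permitted("news", is_personal=True, at=time(12, 30)))       # False: personal use during work hours

Keeping the always-prohibited categories in a single place mirrors the policy requirement that certain uses be barred under all circumstances.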

Training

The development of a formal Internet use policy can promote improved use, especially if reinforced by communicating that policy and conducting education, training, and retraining sessions. Other means to maintain awareness may also be employed, such as presenting the Internet use policy each time an employee logs in or presenting a pop-up reminder when certain system facilities are accessed. Employee awareness of appropriate Internet use can thereby be kept up to date, even as technological advances and associated new means of abuse emerge.

Monitoring Internal Sources

Because it is the employer who is held responsible for employee abuse of the Internet, monitoring, or reserving the right to monitor, is a necessary means to appropriately manage resources. Generally, U.S. federal law allows employee monitoring for business purposes if the employees have been made aware of the extent of monitoring. However, in implementing monitoring programs, applicable statutory, regulatory, and common law requirements intended to protect employees' privacy interests must be considered. If undertaken, such monitoring may reveal the need for filtering software to block employee access to inappropriate Web sites, chat rooms, message boards, and Usenet news groups.

Defenses against External Sources

Automated defenses should be employed against external sources, such as intrusion detection systems, firewalls, virus protection programs, security patches for operating systems, and encryption and authentication products. Procedures to update these defenses should be established to minimize the adverse effect of technological advances rendering current versions obsolete.

Liability Insurance

Organizations may reduce the financial exposure of liability by purchasing insurance. Among the most costly and common risks to online organizations are business interruptions caused by hackers, viruses, and internal saboteurs; litigation costs and settlements for inappropriate employee e-mail and Internet use; failure of products or services to perform as advertised on the Internet; copyright and trademark lawsuits; and patent-infringement claims. The insurance industry has responded by providing various products directed toward E-business.
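
As a companion to the monitoring and filtering measures described above, the sketch below shows one simple way filtering software is commonly structured: a category blocklist consulted before a requested site is fetched, with blocked attempts logged for later review. The category assignments and helper names are hypothetical illustrations, not a reference to any specific filtering product.

    import logging
    from urllib.parse import urlparse

    logging.basicConfig(level=logging.INFO)

    # Hypothetical mapping of known hosts to content categories.
    SITE_CATEGORIES = {
        "jobs.example.net": "job-search",
        "casino.example.com": "gambling",
        "news.example.org": "news",
    }
    BLOCKED_CATEGORIES = {"gambling", "adult", "hate"}

    def allow_request(url: str, user: str) -> bool:
        """Return True if the URL may be fetched; log blocked attempts for review."""
        host = urlparse(url).hostname or ""
        category = SITE_CATEGORIES.get(host, "uncategorized")
        if category in BLOCKED_CATEGORIES:
            logging.info("blocked %s requesting %s (category: %s)", user, url, category)
            return False
        return True

    print(allow_request("http://news.example.org/today", "jsmith"))    # True
    print(allow_request("http://casino.example.com/slots", "jsmith"))  # False, and logged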

CONCLUSION

With the increase in Internet-related business activities, organizations are confronted with a multitude of new risks resulting from both external and internal sources of threats. In light of the potential for adverse impacts on companies, it is imperative that a coordinated and comprehensive response be formulated. Major organizations have already created the CPO position. However, the organizational risks extend well beyond privacy concerns, and the scope of responsibilities must be broadened to include Internet integrity issues. Therefore, we propose that the CPO position be expanded into that of the Chief Privacy and Integrity Officer. Organizations will thereby be enabled to more comprehensively protect themselves, their customers, and their business partners as the use of, and reliance on, the Internet continues to grow.

References

1. Nua Internet Surveys (September 2001). http://www.nua.
2. Nielsen NetRatings (September 2001). http://209.249.142.29/nnpm/owa/Nrpublicreports.usagemonthly.
3. Kuykendall, L. (August 28, 2001). Privacy officers say role keeps on growing. Am. Banker.
4. Cohen, Suzanne (July 2001). Chief privacy officers. Risk Manage.
5. Goldstein, M.P. (Spring 2000). Service provider liability for acts committed by users: what you don't know can hurt you. John Marshall J. Comp. Inf. Law.
6. Copyright Act, 17 U.S.C. § 102 (1994) (defining copyrightable subject matter).
7. Marobie–Fl. Inc. d/b/a Galactic Software v. National Association of Fire Equipment Distributors et al., No. 96-C-2966 (N.D. Ill., July 31, 2000).
8. Communications Decency Act (CDA) of 1996, 47 U.S.C. § 223(e)(4) (1998).
9. American Management Association (2000). A 2000 AMA Survey: Workplace Testing, Monitoring, and Surveillance. http://www.amanet.org.
10. Securities Exchange Act of 1934.
11. Vault.com (2000). Results of Vault.com Survey of Internet Use in the Workplace. http://vault.com.
12. Dean, R. and Carey, A. (2000). Executive Insights on Content Security, International Data Corporation. http://www.idc.com.
13. Young, D. (October 1, 2001). Privacy outlaws. Wireless Rev.



Chapter 58

World Wide Web Application Security

Sean Scanlon

Designing, implementing, and administering application security architectures that address and resolve user identification, authentication, and data access controls has become increasingly challenging as technologies transition from a mainframe architecture, to multiple-tier client/server models, to the newest World Wide Web-based application configurations. Within the mainframe environment, software access control utilities are typically controlled by one or more security officers, who add, change, and delete rules to accommodate the organization's policy compliance. Within the n-tier client/server architecture, security officers or business application administrators typically share the responsibility for any number of mechanisms to ensure the implementation and maintenance of controls. In the Web application environment, however, the application user is a co-owner of the administration process. This chapter provides a detailed project management template for developing an infrastructure for enterprisewide Web application security.

HISTORY OF WEB APPLICATIONS: THE NEED FOR CONTROLS

During the past decade or so, companies spent a great deal of time and effort building critical business applications utilizing client/server architectures. These applications were usually distributed to a set of controlled, internal users and accessed through internal company resources or dedicated, secured remote access solutions. Because of the limited set of users and respective privileges, security was built into the applications or provided by third-party utilities that were integrated with the application. Because of the centralized and limited nature of these applications, management of these solutions was handled by application administrators or a central IT security organization.


Exhibit 1. Considerations for Large Web-Based Application Development

• Authenticating and securing multiple applications, sometimes numbering in the hundreds
• Securing access to applications that access multiple systems, including legacy databases and applications
• Providing personalized Web content to users
• Providing single sign-on access to users accessing multiple applications, enhancing the user experience
• Supporting hundreds, thousands, and even millions of users
• Minimizing the burden on central IT staffs and facilitating administration of user accounts and privileges
• Allowing new customers to securely sign up quickly and easily without requiring phone calls
• Scalability to support millions of users and transactions and the ability to grow to support unforeseen demand
• Flexibility to support new technologies while leveraging existing resources like legacy applications, directory servers, and other forms of user identification
• Integration with existing security solutions and other Internet security components

Now fast-forward to current trends, where the Web and Internet technologies are quickly becoming a key component of companies' critical business applications (see Exhibit 1). Companies are leveraging the Web to enhance communications with customers, vendors, subcontractors, suppliers, and partners, as well as utilizing these technologies to reach new audiences and markets. But the same technologies that make the Web such an innovative platform for enhancing communication also dictate the necessity for detailed security planning. The Web has opened up communication to anyone in the world with a computer and a phone line. The danger is that, along with facilitating communication with new markets, customers, and vendors, there is the potential that anyone with a computer and phone line could now access information intended only for a select few.

For companies that have only a few small applications that are accessed by a small set of controlled users, the situation is fairly straightforward. Developers of each application can quickly use directory- or file-level security; if more granular security is required, the developers can embed security in each application, housing user information and privileges in a security database. Again, within this scenario, management of a small set of users is less time-consuming and can be handled by a customer service group or the IT security department.

However, most companies are building large Web solutions, many times providing front-end applications to multiple legacy systems on the back end. These applications are accessed by a diverse and very large population of users, both internal and external to the organization.


Exhibit 2. Internet Security Architecture

In these instances, one must move to a different mind-set to support log-on administration and access controls for hundreds, thousands, and potentially millions of users. A modified paradigm for security is now a requirement for Web applications: accommodating larger numbers of users in a very noninvasive way. The importance of securing data has not changed; a sure way to lose customers is to have faulty security practices that allow customer information to be accessed by unauthorized outside parties. Further, malicious hackers can access company secrets and critical business data, potentially ruining a company's reputation. The new security challenge for organizations is therefore one of transitioning to electronic business by leveraging the Web, obtaining and retaining external constituents in the most customer-intimate and customer-friendly way, while maintaining the requirement for granular access controls and "least privilege."

HOW WEB APPLICATION SECURITY FITS INTO AN OVERALL INTERNET SECURITY STRATEGY

Brief Overall Description

Building a secure user management infrastructure is just one component of a complete Internet security architecture. While a discussion of a complete Internet security architecture (including network security) is beyond the scope of this chapter, it is important to understand the role played by a secure user management infrastructure. Exhibit 2 provides a general overview of an Internet security architecture and the components that a secure user management infrastructure can help address.


Exhibit 3. Authentication Time Chart

Authentication

A wide range of authentication mechanisms is available for Web systems and applications, and more complex and mature techniques will evolve as the Internet matures (see Exhibit 3). With home-grown security solutions, adopting new authentication techniques as they become available can require rewriting applications and managing complicated migrations. The implementation of a centralized user management architecture can help companies simplify the adoption of new authentication techniques by removing the authentication of users from the Internet applications themselves. As new techniques emerge, changes can be made to the user management infrastructure, while the applications would need only minor updates, or no updates at all.

WHY A WEB APPLICATION AUTHENTICATION/ACCESS CONTROL ARCHITECTURE?

Before deciding whether or not it is necessary to implement a centralized authentication and access control architecture, it is helpful to compare the differences between developing user management solutions for each application and building a centralized infrastructure that is utilized by multiple applications.

Characteristics of decentralized authentication and access control include:

• Low initial costs
• Quick to develop and implement for small-scale projects
• Each application requires its own security solution (developers must build security into each new application)
• User accounts are required for each application
• Users must log in separately to each application
• Accounts for users must be managed in multiple databases or directories
• Privileges must be managed across multiple databases or directories
• Inconsistent approach, as well as a lower security level, because common tasks are often done differently across multiple applications
• Each system requires its own management procedures, increasing administration costs and effort
• Custom solutions may not be scalable as users and transactions increase
• Custom solutions may not be flexible enough to support new technologies and security identification schemes
• May utilize an existing directory services infrastructure

Characteristics of centralized authentication and access control include:

• Higher start-up costs
• More upfront planning and design required
• A centralized security infrastructure is utilized across multiple applications and multiple Web server platforms
• A single account can be used for multiple applications
• Users can log in one time and access multiple applications
• Accounts for multiple applications can be managed in a single directory; administration of accounts can easily be distributed to customer service organizations
• Privileges can be managed centrally and leveraged over multiple applications
• Consistent approach to security; standards are easily developed and managed by a central group and then implemented in applications
• Developers can focus on creating applications without having to build security into each application
• Scalable systems can be built to support new applications, which can leverage the existing infrastructure
• Most centralized solutions are flexible enough to support new technologies; as new technologies and security identification schemes are introduced, they can be implemented independent of applications
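
The decoupling benefit described in the Authentication section above can be pictured as applications calling one central service through a stable interface while the underlying mechanism is swapped out behind it. The sketch below is a minimal, hypothetical illustration of that idea; the class and method names are ours, and a real product would add session management, auditing, and directory integration.

    from abc import ABC, abstractmethod

    class Authenticator(ABC):
        """Interface the central user management service exposes to applications."""
        @abstractmethod
        def authenticate(self, credentials: dict) -> bool: ...

    class PasswordAuthenticator(Authenticator):
        def __init__(self, user_store: dict):
            self.user_store = user_store  # e.g., backed by a directory service
        def authenticate(self, credentials: dict) -> bool:
            # Plaintext comparison for brevity only; a real store would hash passwords.
            return self.user_store.get(credentials.get("user")) == credentials.get("password")

    class CertificateAuthenticator(Authenticator):
        def __init__(self, trusted_subjects: set):
            self.trusted_subjects = trusted_subjects
        def authenticate(self, credentials: dict) -> bool:
            return credentials.get("cert_subject") in self.trusted_subjects

    class UserManagementService:
        """Applications call this service; the mechanism behind it can change freely."""
        def __init__(self, authenticator: Authenticator):
            self.authenticator = authenticator
        def login(self, credentials: dict) -> bool:
            return self.authenticator.authenticate(credentials)

    service = UserManagementService(PasswordAuthenticator({"jsmith": "s3cret"}))
    print(service.login({"user": "jsmith", "password": "s3cret"}))  # True
    # Migrating to certificates changes only the infrastructure, not the applications:
    service.authenticator = CertificateAuthenticator({"CN=jsmith,O=Example"})
    print(service.login({"cert_subject": "CN=jsmith,O=Example"}))   # True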

PROJECT OVERVIEW

Purpose

Because of the diverse nature of the users, data, systems, and applications that can potentially be supported by the centralized user management infrastructure, it is important to ensure that detailed requirements and project plans are developed prior to product selection and implementation (see Exhibit 4). Upfront planning will help ensure that all business and technical requirements are identified and prioritized, potentially helping prevent serious schedule issues and cost overruns.

PROJECT PLANNING AND INITIATION

Project Components

Three key components make up the development of an enterprisewide Web security user management infrastructure (see Exhibit 5). While there is significant overlap between the components, and each component will affect how the others are designed, breaking the project into components makes it more manageable.

Infrastructure. The infrastructure component involves defining the back-end networking and server components of the user management infrastructure, and how that infrastructure integrates into the overall Web and legacy data system architecture.

Directory Services. The directory services component involves defining where user information will be stored, what type of information will be stored, and how that information will be synchronized with other data systems.

Administration Tools. The administration tools component defines the processes and procedures that will be used to manage user information, delegation of administration, and business processes and rules. It also involves developing the tools that are used to manage and maintain that information.

Roles and Responsibilities

Security. The security department is responsible for ensuring that the requirements meet the overall company security policies and practices. Security should also work closely with the business to help them identify business security requirements. Processes and procedures should be updated in support of the new architecture.

Business. The business is responsible for identifying the business requirements associated with the applications.

Exhibit 4. Project Phases

Project planning and initiation
  Develop project scope and objectives
  Outline resources required for requirements and design phase
  Roles and responsibilities

Requirements
  Develop business requirements
  Develop technical requirements
  Develop risk assessment
  Develop contingency plans
  Prioritize requirements and set selection criteria
  Roles and responsibilities

Product strategy and selection
  Decide on centralized versus decentralized strategy
  Make or buy
  Product evaluation and testing
  Product selection
  License procurement

Design
  Server architecture
  Network architecture
  Directory services: directory services strategy, architecture, schema
  Development environment standards
  Administrative responsibilities: account, infrastructure

Implementation
  Administrative tools development
  Server implementation
  Directory services implementation
  Integration

Testing
  Functionality
  Performance
  Scalability and failover
  Testing strategies
  Pilot test

Post-implementation
  Ongoing support

Application Developers. Application developers are responsible for identifying the tool sets currently in place, information storage requirements, and other requirements associated with the development of the applications that will utilize the infrastructure.

Infrastructure Components. It is very important that the infrastructure and networking groups are involved: the infrastructure group will provide support for the hardware, Web servers, and directory services; the networking group will ensure that the necessary network connections and bandwidth are available.



Exhibit 5. Web Security User Management Components

REQUIREMENTS

Define Business Requirements

Before evaluating the need for selecting and implementing a centralized security authentication infrastructure, it is critical to ensure that all business requirements are thoroughly identified and prioritized. This process is no different than building the business and security requirements for client/server and Internet applications. Identifying the business requirements will help answer the following key questions:

1. What existing security policies and processes are in place?
2. Is the cost of implementing a single centralized infrastructure warranted, or is it acceptable to implement decentralized security in each application?
3. What data and systems will users be accessing? What is the confidentiality of the data and systems being accessed?
4. What are the business security requirements for the data and systems being accessed? Are there regulations and legal issues regarding the information that dictate specific technologies or processes?
5. What types of applications will require security? Will users be accessing more than one application? Should they be allowed single sign-on access?
6. What type of auditing is required? Is it permissible to track user movements within the Web site?
7. Is user personalization required?
8. Is self-registration necessary, or are users required to contact a customer service organization to request a name and password?

9. Who will be responsible for administering privileges? Are there different administration requirements for different user groups?
10. What are the projected numbers of users?
11. Are there password management requirements?
12. Who will be accessing applications and data? Where are these users located? This information should be broken down into groups and categories if possible.
13. What are the various roles of the people accessing the data? Roles define the application and data privileges users will have.
14. What are the timeframes and schedules for the applications that the infrastructure will support?
15. What are the cost constraints?

Define Technical Requirements

After defining the business requirements, it is important to understand the existing technical environment and requirements. This will help determine the size and scope of the solution required, the platforms that need to be supported, and the development tools that need to be supported by the solution. Identifying the technical requirements will help answer the following key questions:

1. What legacy systems need to be accessed?
2. What platforms need to be supported?
3. Is there an existing directory services infrastructure in place, or does a new one need to be implemented?
4. What Web development tools are utilized for applications?
5. What are the projected numbers of users and transactions?
6. How granular should access control be? Can users access an entire Web site, or is specific security required for single pages, buttons, objects, and text?
7. What security identification techniques are required: account/password, biometrics, certificates, etc.? Will new techniques be migrated to as they are introduced?
8. Is new equipment required? Can it be supported?
9. What standards need to be supported?
10. Will existing applications be migrated to the new infrastructure, including client/server and legacy applications?
11. What are the cost constraints?

Risk Assessment

Risk assessment is an important part of determining the key security requirements (see Exhibit 6).

Exhibit 6. Risk Assessment

• What needs to be protected?
  - Data
  - Systems
• Who are the potential threats?
  - Internal
  - External
  - Unknown
• What are the potential impacts of a security compromise?
  - Financial
  - Legal
  - Regulatory
  - Reputation
• What are the realistic chances of the event occurring?
  - Attempt to determine the realistic chance of the event occurring
  - Verify that all requirements were identified

While a detailed treatment of security risk assessment is beyond the scope of this chapter, it is important to understand some of the key analyses that need to be done. The benefit of risk assessment is twofold: it ensures that one does not spend hundreds of thousands of dollars to protect information that has little financial worth, and it ensures that a potential security compromise that could cause millions of dollars worth of damage, in both hard dollars and reputation, does not occur because one failed to make what in hindsight is an insignificant investment.

The most difficult part of developing the risk assessment is determining the potential impacts and the realistic chances of the event occurring. In some cases, it is very easy to identify the financial impacts, but careful analysis must be done to determine the potential legal, regulatory, and reputation impacts. While a security breach may not have a direct financial impact, if user information is publicized on the front page of the business section, the damage caused to one's reputation and the effect that has on attracting new users could be devastating.

Sometimes it can be very difficult to identify the potential chance of a breach occurring. Threats can come from many unforeseen directions, and new attacks are constantly being developed. Detailed processes, including monitoring and regular reviews of audit logs, should be put in place; these can be helpful in identifying existing or potential threats and analyzing their chance of occurrence. Analysis of threats, new and existing, should be performed routinely.
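
One simple way to make the impact-versus-likelihood weighing concrete is to express each threat as an expected annual loss: the estimated cost of a single occurrence multiplied by the estimated chance of it occurring in a year. The figures and threat names below are purely illustrative assumptions; the point is the comparison, not the numbers.

    # Hypothetical entries: (threat, estimated loss per occurrence, occurrences per year).
    threats = [
        ("Web site defacement", 50_000, 0.5),
        ("DDoS outage", 1_000_000, 0.1),
        ("Inappropriate employee surfing", 9_600, 25),  # e.g., per-employee productivity loss across 25 employees
    ]

    for name, single_loss, annual_rate in threats:
        expected_annual_loss = single_loss * annual_rate
        print(f"{name}: expected annual loss ${expected_annual_loss:,.0f}")

    # Web site defacement: expected annual loss $25,000
    # DDoS outage: expected annual loss $100,000
    # Inappropriate employee surfing: expected annual loss $240,000

Such a comparison suggests where protective spending is justified and where it would exceed the value of what is being protected.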

Prioritization and Selection Criteria

After defining the business and technical requirements, it is important to ensure that the priorities are discussed and agreed upon. Each group should completely understand the priorities and requirements of the other groups. In many cases, requirements may be identified that are nice to have but are not a priority for implementing the infrastructure. One question that should be asked is: is one willing to delay implementation for an extended amount of time to implement that requirement? For example, would the business group wait an extra six months to deliver the application so that it is personalized to the user, or is it willing to implement an initial version of the Web site and upgrade it in the future? By clearly understanding the priorities, developing selection criteria will be much easier, and products can be separated and evaluated based on how well they meet the key criteria and requirements.

Selection criteria should be based on the requirements identified and the priorities of all parties involved. A weight should be given to each selection criterion; as products are analyzed, a rating can be given against each criterion and then multiplied by the weight. While one product may meet more requirements overall, one may find that it does not meet the most important selection criteria and, therefore, is not the proper selection. It is also important to revisit the requirements and their priorities on a regular basis. If the business requirements change in the middle of the project, it is important to understand those changes and evaluate whether the project is still moving in the right direction or whether modifications need to be made.

PRODUCT STRATEGY AND SELECTION

Selecting the Right Architecture

Selecting the right infrastructure includes determining whether a centralized or decentralized architecture is more appropriate and whether to develop the solution in-house or purchase and implement a third-party solution.

Centralized or Decentralized. Before determining whether to make or buy, it is first important to understand whether a centralized or decentralized infrastructure meets the organization's needs (see Exhibit 7). Based on the requirements and priorities identified above, it should become obvious whether the organization should implement a centralized or decentralized architecture; the characteristics summarized in Exhibit 7 serve as a general rule of thumb.

Make or Buy. If one has determined that a centralized architecture is required to meet one's needs, then it is realistic to expect that one will be purchasing and implementing a third-party solution.
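
Returning to the weighting approach described under Prioritization and Selection Criteria, the sketch below shows the arithmetic in its simplest form: each candidate product's rating on a criterion is multiplied by the criterion's weight and summed. The criteria, weights, and ratings are invented for illustration only.

    # Hypothetical weights (importance) and product ratings (how well each candidate scores).
    weights = {"single sign-on": 5, "scalability": 4, "delegated administration": 3, "cost": 2}

    ratings = {
        "Product A": {"single sign-on": 4, "scalability": 5, "delegated administration": 2, "cost": 3},
        "Product B": {"single sign-on": 5, "scalability": 3, "delegated administration": 4, "cost": 4},
    }

    def weighted_score(product_ratings: dict) -> int:
        return sum(weights[criterion] * rating for criterion, rating in product_ratings.items())

    for product, product_ratings in ratings.items():
        print(product, weighted_score(product_ratings))
    # Product A: 5*4 + 4*5 + 3*2 + 2*3 = 52
    # Product B: 5*5 + 4*3 + 3*4 + 2*4 = 57

A product that satisfies more criteria in absolute terms can still lose to one that scores higher on the heavily weighted criteria, which is exactly the point of weighting.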

Exhibit 7. Centralized or Decentralized Characteristics

Centralized
  Multiple applications
  Supports large number of users
  Single sign-on access required
  Multiple authentication techniques
  Large-scale growth projected
  Decentralized administration
  Detailed audit requirements

Decentralized
  Cost is a major issue
  Small number of applications
  One authentication technique
  Minimal audit requirements
  Manageable growth projected
  Minimal administration requirements

For large-scale Web sites, the costs associated with developing and maintaining a robust and scalable user management infrastructure quickly surpass the costs associated with purchasing, installing, and maintaining a third-party solution. If it has been determined that a decentralized architecture is more appropriate, it is realistic to expect that one will be developing one's own security solution for each Web application, or implementing a third-party solution on a small scale, without the planning and resources required to implement an enterprisewide solution.

Product Evaluation and Testing. Having made a decision to move forward with buying a third-party solution, now the real fun begins: ensuring that one selects the best product that will meet one's needs and that can be implemented according to one's schedule.

Before beginning product evaluation and testing, review the requirements, prioritization, and selection criteria to ensure that they accurately reflect the organization's needs. A major determination when doing product evaluation and testing is to define the following:

• What are the time constraints involved with implementing the solution? If there are time constraints, they may limit the number of tools that one can evaluate, or force product selection based on vendor demonstrations, product reviews, and customer references. Time constraints will also determine how long and in what detail each product can be evaluated. It is important to understand that implementing a centralized architecture can be a time-consuming process and, therefore, detailed testing may not be possible. Top priorities should be the focus, with the evaluation of lower priorities based on vendor demonstrations and other resources.
• Is there an in-house solution already in place? If there is an in-house solution in place, or a directory services infrastructure that can be leveraged, this can help facilitate testing.

Before beginning product evaluation and testing, review the requirements, prioritization, and selection criteria to ensure that they accurately reflect the organization’s needs. A major determination when doing product evaluation and testing is to define the following: • What are the time constraints involved with implementing the solution? Are there time constraints involved? If so, that may limit the number of tools that one can evaluate or select products based on vendor demonstrations, product reviews, and customer references. Time constraints will also identify how long and detailed one can evaluate each product. It is important to understand that implementing a centralized architecture can be a time-consuming process and, therefore, detailed testing may not be possible. Top priorities should be focused on, with the evaluation of lower priorities based on vendor demonstrations and other resources. • Is there an in-house solution already in place? If there is an in-house solution in place, or a directory services infrastructure that can be leveraged, this can help facilitate testing. 740

World Wide Web Application Security • Is hands-on testing required? If one is looking at building a large-scale solution supporting millions of users and transactions, one will probably want to spend some time installing and testing at least one tool prior to making a selection. • Are equipment and resources available? While one might like to do detailed testing and evaluation, it is important to identify and locate the appropriate resources. Hands-on testing may require bringing in outside consulting or contract resources to perform adequate tests. In many cases, it may be necessary to purchase equipment to perform the testing; and if simultaneous testing of multiple tools is going to occur, then each product should be installed separately. Key points to doing product evaluation and testing include: • To help facilitate installation and ensure proper installation, either the vendor or a service organization familiar with the product should be engaged. This will help minimize the lead time associated with installing and configuring the product. • Team meetings, with participants from Systems Development, Information Security and Computer Resources, should occur on a regular basis so that issues can be quickly identified and resolved by all stakeholders. • If multiple products are being evaluated, each product should be evaluated separately and then compared against the other products. While one may find that both products meet a requirement, it may be that one product meets it better. Product Selection. Product selection involves making a final selection of a product. A detailed summary report with recommendations should be created. The summary report should include:

• • • • • • • • •

Business requirements overview Technical requirements overview Risk assessment overview Prioritization of requirements Selection criteria Evaluation process overview Results of evaluation and testing Risks associated with selection Recommendations for moving forward

At this point, one should begin paying special attention to the risks associated with moving forward with the selected product and begin identifying contingency plans that need to be developed. License Procurement. While selecting a product, it is important to understand the costs associated with implementing that product. If there are 741

LEVERAGING E-BUSINESS OPPORTUNITIES severe budget constraints, this may have a major impact on the products that can be implemented. Issues associated with purchasing the product include: 1. How many licenses are needed? This should be broken out by timeframes: immediate (3 months), short term (6 to 12 months), and long term (12 months+). 2. How is the product licensed? Is it a per-user license, a site license? Are transaction fees involved? What are the maintenance costs of the licenses? Is there a yearly subscription fee for the software? 3. How are the components licensed? Is it necessary to purchase server licenses as well as user licenses? Are additional components required for the functionality required by the infrastructure? 4. If a directory is being implemented, can that be licensed as part of the purchase of the secure user management product? Are there limitations on how that directory can be used? 5. What type of, if any, implementation services are included in the price of the software? What are the rates for implementation services? 6. What type of technical support is included in the price of the software? Are there additional fees for the ongoing technical support that will be required to successfully maintain the product? DESIGN The requirements built for the product selection should be reevaluated at this stage, especially the technical requirements, to ensure that they are still valid. At this stage, it may be necessary to obtain design assistance from the vendor or one of its partner service organizations to ensure that the infrastructure is designed properly and will meet both immediate and future usage requirements. The design phase can be broken into the following components. Server Infrastructure The server infrastructure should be the first component analyzed. • What is the existing server infrastructure for the Internet/intranet architecture? • What components are required for the product? Do client agents need to be installed on the Web servers, directory servers, or other servers that will utilize the infrastructure? • What servers are required? Are separate servers required for each component? Are multiple servers required for each component? • What are the server sizing requirements? The vendor should be able to provide modeling tools and sizing requirements. 742

World Wide Web Application Security • What are the failover and redundancy requirements? What are the failover and redundancy capabilities of the application? • What are the security requirements for the information stored in the directory/databases used by the application? Network The network should next be analyzed. • What are the network and bandwidth requirements for the secure user management infrastructure? • What is the existing Internet/intranet network design? Where are the firewalls located? Are traffic load balancers or other redundancy solutions in place? • If the Internet servers are hosted remotely, what are the bandwidth capabilities between the remote site and one’s internal data center? Directory Services The building of a complete directory infrastructure in support of a centralized architecture is beyond the scope of this chapter. It is important to note that the directory services are the heart and soul of one’s centralized architecture. The directory service is responsible for storing user-related information, groups, rights and privileges, and any potential personalization information. Here is an overview of the steps that need to be addressed at this juncture. Directory Services Strategy.

• What is the projected number of users? The projected number of users will have a major impact on the selection of a directory solution. One should break projections into timeframes: 1 month, 6 months, 1 year, and 2 years.
• Is there an existing directory service in place that can be utilized? Does the organization have an existing directory service that can be leveraged? Will this solution scale to meet long-term user projections? If not, can it be used in the short term while a long-term solution is being implemented?
• What type of authentication schemes will be utilized? Determining the type of authentication schemes to be utilized will help identify the type of directory service required.

Directory Schema Design.

• What type of information needs to be stored?
• What are the namespace design considerations?
• Is only basic user account information being stored, or is additional information, like personal user information and customization features, required?
• What are the administration requirements?
• What are the account creation and maintenance requirements?

Development Environment
Building a development environment for software development and testing involves development standards.

Development Standards. To take advantage of a centralized architecture, it is necessary to build development security processes and development standards. This will facilitate the design of security into applications and the development of applications (see Exhibit 8). The development security process should focus on helping the business and development team design the security required for each application. Exhibit 8 is a sample process created to help facilitate the design of security requirements for Web-based applications utilizing a centralized authentication tool.
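As a complement to the process in Exhibit 8, a development standard can also be expressed directly in code. The sketch below, in Python, shows one way an application handler might be required to delegate authentication and authorization to the centralized user management layer rather than implementing its own checks; the header names, role name, and helper functions are assumptions for illustration only, not features of any particular product.

# Sketch of a development standard: application code consumes the identity
# and entitlements established by the central user management layer and
# never implements its own login or role logic. All names are hypothetical.

class NotAuthorized(Exception):
    pass

def identity_from(headers):
    """Read the identity asserted upstream (e.g., by a Web server agent)."""
    user = headers.get("X-Authenticated-User")           # assumed header name
    roles = set(filter(None, headers.get("X-User-Roles", "").split(",")))
    if not user:
        raise NotAuthorized("no authenticated identity present")
    return user, roles

def require_role(role):
    """Decorator capturing the security requirement designed for a handler."""
    def wrap(handler):
        def guarded(headers, *args, **kwargs):
            user, roles = identity_from(headers)
            if role not in roles:
                raise NotAuthorized(f"{user} lacks role {role}")
            return handler(user, *args, **kwargs)
        return guarded
    return wrap

@require_role("claims-adjuster")                          # hypothetical role
def view_claim(user, claim_id):
    return f"{user} viewed claim {claim_id}"

print(view_claim({"X-Authenticated-User": "jdoe",
                  "X-User-Roles": "claims-adjuster"}, "C-1001"))

Keeping the rules and policies in the central infrastructure in this way is what makes the administration and testing activities described below tractable.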

Administrative Responsibilities There are multiple components of administration for a secure user management infrastructure. There is administration of the users and groups that will be authenticated by the infrastructure; there is administration of the user management infrastructure itself; and there is the data security administration that is used to develop and implement the policies and rules used to protect information. Account Administration. Understanding the administration of accounts and user information is very important in developing the directory services architecture. The hierarchy and organization of the directory will resemble how the management of users is delegated.
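As an illustration of that point, the sketch below lays out a hypothetical directory organized around the units that will administer their own entries, with group memberships carrying rights and privileges. It is expressed as a plain Python structure rather than any particular directory product's syntax, and every container and attribute name shown is an assumption made for this sketch.

# Hypothetical directory layout, organized by who administers each branch.
# Container names, attribute names, and groups are illustrative only.
directory = {
    "o=example": {
        "ou=employees": {            # administered by internal IT
            "uid=jdoe": {
                "cn": "Jane Doe",
                "mail": "jdoe@example.com",
                "groups": ["intranet-users", "claims-adjuster"],
            },
        },
        "ou=customers": {            # administered by member services
            "uid=c1001": {
                "cn": "Sample Customer",
                "mail": "customer@example.net",
                "groups": ["portal-users"],
                "preferences": {"language": "en"},   # personalization data
            },
        },
        "ou=partners": {},           # self-registration could populate this
    }
}

def members_of(group, tree=directory, suffix=""):
    """Walk the tree and list the entries that carry the given group."""
    found = []
    for name, node in tree.items():
        if isinstance(node, dict) and "groups" in node:
            if group in node["groups"]:
                found.append(f"{name},{suffix}".rstrip(","))
        elif isinstance(node, dict):
            found.extend(members_of(group, node, f"{name},{suffix}"))
    return found

print(members_of("portal-users"))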

If self-administration and registration are required, this will impact the development of administrative tools.

Infrastructure Administration. As with the implementation of any enterprisewide solution, it is very important to understand the various infrastructure components and how they will be administered, monitored, and maintained. With the Web globalizing applications and being "always on," the user management infrastructure will be the front door to many of the applications and commerce solutions that will require 24 × 7 availability and all the maintenance and escalation procedures that go along with a 24 × 7 infrastructure.


Exhibit 8. Application Security Design Requirements

Data Security Administration. A third set of administrators is required. The role of data security administrators is to work with data owners to determine how the information is to be protected, and then to develop the rules and policies that will be used by the management infrastructure and developers to protect the information.

TESTING
The testing of one's centralized architecture will resemble that of any other large-scale enterprisewide or client/server application. The overall test plan should include all the features listed in Exhibit 9.

Exhibit 9. Testing Strategy Examples

Functionality: To ensure that the infrastructure is functioning properly. This would include testing rules and policies to ensure that they are interacting correctly with the directory services. If custom administrative tools are required for the management of the directory, this would also include detailed testing to ensure that these tools are secure and functioning properly.

Performance: Because the centralized infrastructure is the front end to multiple applications, it is important to do performance and scalability testing to ensure that the user management infrastructure does not become a bottleneck and adversely affect the performance and scalability of applications. Standard Internet performance testing tools and methods should be utilized.

Reliability and failover: An important part of maintaining 24 × 7 availability is built-in reliability, fault tolerance, and failover. Testing should occur to ensure that the architecture will continue to function despite hardware failures, network outages, and other common outages.

Security: Because one's user management infrastructure is ultimately a security tool, it is very important to ensure that the infrastructure itself is secure. Testing would mirror standard Internet and server security tests like intrusion detection, denial-of-service, password attacks, etc.

Pilot test: The purpose of the pilot is to ensure that the architecture is implemented effectively and to help identify and resolve any issues in a small, manageable environment. Because the user management architecture is really a tool used by applications, it is best to integrate the pilot testing of the infrastructure into the rollout of another application. The pilot group should consist of the people who are going to be using the product. If it is targeted toward internal users, then the pilot end-user group should be internal users. If it is going to be the general Internet population, one should attempt to identify a couple of best customers who are willing to participate in a pilot test/beta program. If the member services organization will be administering accounts, then they should be included as part of the pilot to ensure that that process has been implemented smoothly. The pilot test should focus on all aspects of the process, not just testing the technology. If there is a manual process associated with distributing the passwords, then this process needs to be tested as well. One would hate to go live and have thousands of people access accounts the first day only to find out that one has never validated that the mailroom could handle the additional load.
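As a hedged illustration of the performance row in Exhibit 9, the Python sketch below fires a batch of concurrent requests at the infrastructure's login page and reports response times. The endpoint and load level are hypothetical, and a real engagement would rely on purpose-built load-testing tools rather than a script like this.

# Illustrative concurrency probe for the authentication front end.
# The endpoint and load level are hypothetical placeholders.
import time
import urllib.request
import urllib.error
from concurrent.futures import ThreadPoolExecutor

LOGIN_URL = "https://portal.example.com/auth/login"   # assumed endpoint
CONCURRENT_USERS = 25

def one_login_attempt(_):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(LOGIN_URL, timeout=10) as resp:
            resp.read()
    except (urllib.error.URLError, OSError):
        return None                      # count as a failure
    return time.monotonic() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(one_login_attempt, range(CONCURRENT_USERS)))

timings = [t for t in results if t is not None]
failures = len(results) - len(timings)
if timings:
    print(f"{len(timings)} ok, {failures} failed, "
          f"avg {sum(timings)/len(timings):.2f}s, worst {max(timings):.2f}s")
else:
    print(f"All {failures} requests failed")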

SUMMARY
This chapter has demonstrated the variety of issues that need careful attention when an organization implements secure Web applications using third-party commercial software. An organization cannot simply purchase a packaged security solution and implement it with the hope of immediate success. High-quality security is always based on a carefully designed security architecture that is developed and implemented by cross-functional teams. A third-party product is often an essential component of the security solution, but the product, and its implementation, must be evaluated in the appropriate organizational context.


Section 5

Facilitating Knowledge Work

FACILITATING KNOWLEDGE WORK It has been more than two decades since the introduction of the IBM PC legitimized the direct usage of computer tools by the knowledge worker. In the ensuing years, the diffusion of personal productivity tools in the workplace continued to accelerate, aided by the maturation of many supporting technologies: local area networks, graphical user interfaces, personal productivity software suites, Internet navigation via a browser, increasingly portable devices, and software to support workgroups. Today’s typical knowledge worker in a U.S.-based firm is a savvy computer user who demands reliable support services for networked and Web-based applications in the workplace and, in many cases, anytime/anywhere. Facilitating knowledge work is therefore a critical information systems (IS) management role. To some employees, the information technology (IT) help line is the IT organization, and employee demands for zero downtime and fast responses by support personnel rival those of the demanding online buyer. There is also a heightened sensitivity to individual privacy and network security issues. The twelve chapters in this final section of the Handbook address one traditional and two newer end-user computing topics: • Providing Support and Controls • Supporting Remote Workers • Knowledge Management PROVIDING SUPPORT AND CONTROLS The first chapter on this topic — Chapter 59, “Improving Satisfaction with End-User Support” — describes some survey instruments and techniques to assess client satisfaction with IS organization support and the quality of specific support services provided. The authors recommend using multiple instruments to understand not only performance levels but also the importance of a given service to a user group, in order to best leverage scarce IT support resources. The author of Chapter 60, “Internet Acceptable Usage Policies,” argues for the importance of well-defined, clearly communicated policies. Acceptable usage policies have proved to be important safeguards against costly misunderstandings that can occur between employees and their managers, including potential legal actions. Useful guidelines for drafting and implementing these policies are provided. Chapters 61 and 62 present specific guidelines for managing the organizational risks associated with the development of applications by non-IT professionals. The authors of Chapter 61, “Managing Risks in User Computing,” describe in detail a wide range of end-user computing risks, and then outline the key elements of a control strategy to manage them. Chapter 62, “Reviewing User-Developed Applications,” provides a comprehensive 750

FACILITATING KNOWLEDGE WORK primer for establishing a review process for user-developed applications. A checklist that can be easily adapted for organization-specific audits is provided. Although management’s willingness to invest in IT security and disaster recovery procedures in general is at an all-time high, due to recent terrorist attacks within the United States, the IT security risks during workforce downsizing initiatives are not always well managed. Chapter 63, “Security Actions during Reduction in Workforce Efforts: What to Do When Downsizing,” details the nature of the security risks of which all managers should be aware. A checklist of actions to be taken as part of an IT security program is provided. SUPPORTING REMOTE WORKERS Today’s technologies enable new ways of working, including individuals telecommuting from home, satellite offices, or on-the-road, as well as working as members of a virtual team where the team members are not all colocated. Although the individual benefits include a better quality of worklife, telework today is being more widely embraced due to cost savings, productivity gains, and other organizational benefits. Chapter 64, “Supporting Telework: Obstacles and Solutions,” begins by describing the obstacles that can derail or minimize the benefits of a telework initiative: task- and resource-related obstacles as well as management-related obstacles. A detailed discussion of two categories of solutions is then provided: removing obstacles with technologies and removing obstacles with management actions — including modified reward mechanisms. Whereas Chapter 64 is concerned with implementation issues for a wide range of telework arrangements, Chapter 65, “Virtual Teams: The CrossCultural Dimension,” addresses some specific challenges faced by virtual teams. As the authors point out, virtual teams are separated not only by space and time, but also often by culture. The authors apply some crosscultural theory to explain why team members from different countries of origin may have different communication preferences and therefore prefer different communications technologies for a given task. Additional management actions may therefore be required for a virtual team to be successful. The objective of Chapter 66, “When Meeting Face-to-Face Is Not the Best Option,” is to challenge managers to test their own assumptions about whether a face-to-face meeting is required for a specific task, or whether the meeting goals can be accomplished via the use a technology alternative. This chapter can therefore be used to sensitize managers about alternative technologies to face-to-face meetings for different types of group 751

FACILITATING KNOWLEDGE WORK tasks. Checklists to use in preparation for replacing in-person meetings with conference calls, videoconferencing, or Web meetings are provided. KNOWLEDGE MANAGEMENT The 1990s witnessed widespread experimentation with groupware applications as robust document management tools such as Lotus Notes became available. Since then, many organizations have invested in Web-based technologies for document sharing across geographically separated organizational employees in different subunits, divisions, and nations. Yet knowledge management (KM) is still a thorny management issue for most IT leaders: it requires not only new software for qualitative document sharing, but also cultural changes. Based on the author’s experiences as an IT organization leader as well as an IT software innovator, Chapter 67, “Sustainable Knowledge: Success in an Information Economy,” begins by defining what is meant by institutional knowledge and the complex role of KM. The author then shares his views about what it takes to develop a knowledge culture that understands the organizational value of leveraging intellectual resources. Chapter 68, “Knowledge Management: Coming Up the Learning Curve,” begins with a general business case for KM initiatives. The author then shares three case examples of best practices in thinking about and deploying KM initiatives based on his own field research of leading manufacturing and service companies. The chapter ends with four prerequisite conditions for a successful KM initiative, including senior management endorsement and a culture that rewards knowledge sharing. The final two chapters in this section provide some useful templates for organizational KM initiatives. Chapter 69, “Building Knowledge Management Systems,” describes the structure and features of a repository model of a KM system. The author’s checklist can be used to select multiple KM tools to be integrated into a comprehensive KM system. Chapter 70, “Preparing for Knowledge Management: Process Mapping,” first describes the roles and responsibilities of a KM team and then focuses on a methodology for business process mapping as part of an enterprise-level KM initiative. The goal is to develop a knowledge library of both explicit and tacit business practices to leverage an organization’s know-how.


Chapter 59

Improving Satisfaction with End-User Support
Nancy C. Shaw
Fred Niederman
Joo-Eng Lee-Partridge

Information technology (IT) professionals charged with supporting computer end users face an increasingly complex and difficult task. End users are becoming more sophisticated in their computer usage, are under more pressure to perform well, and generally expect better results from the information systems provided. From the perspective of the support providers, this environment requires supporting clients with an increasingly diverse set of technical competencies and needs, while at the same time supporting an expanding set of platforms, applications, tools, and procedures. Furthermore, what satisfies one client may not be important at all to another user. What satisfies users in a given workgroup may also not be the same as what satisfies users in another department. As a result, those responsible for managing end-user support face a difficult task in determining the level of support to offer each user group, and then delivering that support in a cost-effective and timely manner. The objective here is to first describe two types of survey instruments developed by members of the academic community to assess the following: (1) client satisfaction with support by an information systems (IS) organization, and (2) the quality of specific support services provided by an IS department. We then provide some examples of initiatives directed at two types of performance gaps: (1) positive gaps in which the IS department is committing more resources than needed, and (2) negative gaps in which the IS department is underperforming.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


Exhibit 1. EUC Satisfaction Questions
(Each statement is rated on a seven-point scale from 1 = Disagree to 7 = Agree.)

1. Your relationship with the Office of Information Technology (OIT) staff is good.
2. Your communication with the OIT staff is precise.
3. The attitude of the OIT staff is positive.
4. The degree of training provided to you is sufficient.
5. The speed of the responses to your requests for service is good.
6. The quality of the responses to your requests for service is good.
7. The information that is generated from your computing activities is relevant.
8. The information that is generated from your computing activities is accurate.
9. The information that is generated from your computing activities is precise.
10. The information that is generated from your computing activities is complete.
11. The information that is generated from your computing activities is reliable.
12. You are able to carry out your computing activities with speed.
13. Your understanding of the applications you use is good.
14. Your perceived participation in the information systems function is high.
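To show how responses to an instrument like the one in Exhibit 1 can be turned into the measures discussed in this chapter, the short Python sketch below averages each respondent's ratings into dimension scores and a single overall satisfaction score. It is illustrative only; in particular, the grouping of items into the three dimensions is an assumption made for this sketch, not the validated mapping from the published questionnaire.

# Illustrative scoring of the 14-item satisfaction instrument (1-7 scale).
# The item-to-dimension grouping below is an assumption for this sketch.
DIMENSIONS = {
    "service": [1, 2, 3, 4, 5, 6],
    "information_and_knowledge": [7, 8, 9, 10, 11, 12, 13],
    "participation": [14],
}

def score_response(answers):
    """answers maps item number (1-14) to a rating of 1-7."""
    by_dimension = {
        name: sum(answers[i] for i in items) / len(items)
        for name, items in DIMENSIONS.items()
    }
    overall = sum(answers.values()) / len(answers)   # single combined measure
    return by_dimension, overall

# One hypothetical respondent
ratings = dict(enumerate([6, 5, 6, 3, 4, 5, 6, 6, 5, 5, 6, 4, 5, 3], start=1))
dimensions, overall = score_response(ratings)
print(dimensions, round(overall, 2))

Averaged across respondents, the dimension scores indicate where satisfaction diverges, while the overall mean serves as the single combined measure described in the next section.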

SURVEY INSTRUMENT TO ASSESS CLIENT SATISFACTION Instruments for measuring user satisfaction have been developed and used by IS researchers for more than a decade. For measuring overall satisfaction with end-user computing, an instrument that has been widely used in various types of organizations is a 14-item questionnaire (see Exhibit 1). According to Mirani and King,1 this instrument measures three dimensions of end-user computing: service, information and knowledge, and participation. The data collected on these three dimensions can help the organization to differentiate areas of greater versus lesser satisfaction among users. A useful property of this instrument is that the 14 items can also be combined to obtain a single measure of satisfaction that has been statistically shown to provide a good indication of the overall satisfaction levels of end users within an organization. SURVEY INSTRUMENT TO ASSESS QUALITY OF SUPPORT SERVICES In addition to measuring end-user satisfaction, it is valuable to assess the user’s perceptions of service quality. Service quality can be measured by comparing user expectations of service with the perceived performance of the department or unit providing the service. Several IS researchers have 754

Improving Satisfaction with End-User Support developed instruments to measure IT service quality, based on an instrument (SERVQUAL) designed to measure service quality in retailing and service organizations in general.2 For example, a 22-item instrument has been developed specifically to measure the quality of IT support services for an IS department.3 This questionnaire first asks users to evaluate the 22 specific support items in terms of their importance to the user, and then the perceived performance of the IS department in providing these support items. The difference between these two measurements is called the service-quality gap. IS researchers generally view this instrument as a useful indicator of overall service quality,4,5 as well as quality of the specific services provided. Based on our experience, the results of this type of survey instrument can be used in several useful ways. First, after averaging the ratings for each service item, these items can be ranked from the highest to lowest in terms of importance to the user community. This provides insights about the services that are most highly valued by the user community. Second, the same items can be ranked from highest to lowest in terms of perceived performance provided by the IT support group. This provides an initial view on the perceived weaknesses and strengths of IT support. Third, for each item, the difference between importance and performance as perceived by the users provides a set of “gaps” that can also be ranked. Typically, some items will show a positive gap, which indicates a higher evaluation of performance over importance. In these cases, it is worth considering whether the IT support group might be putting too many resources into something that is not highly valued by the users. Of course, this conclusion should be tempered with professional expertise: users may perceive performance to be higher than importance for security, auditing, or backup services, but their views might change if resources were moved away from these less-visible but critical areas. Here, the IT support group might want to consider publicizing the importance of these services and broadcasting its successful performance in providing these services. In our experience, however, negative gaps are more frequently observed than positive gaps: a negative gap indicates that the IS department is underperforming (implying dissatisfaction with IS support). Most serious among these are cases in which the item is rated among the most important service by the client but the performance provided by the IS organization is viewed among the lowest. IT support group managers can use these survey results to focus their attention on specific services. Other scenarios include relatively high importance and moderate performance, and moderate importance and relatively low performance. In these cases, the actual service-quality gaps will be about the same magni755

FACILITATING KNOWLEDGE WORK tude, and both IS and user expertise will be important in setting realistic priorities for the particular setting. Note that neither of these instruments — even used together — addresses the costs of attempting to “fill these gaps.” It may be that expenditures of a modest nature can fill some large gaps, whereas some of the smaller gaps may require extraordinary investment. Additionally, staff should keep in mind the potential for addressing multiple service areas through single interventions (e.g., upgrading equipment). SEGMENTING USER GROUPS Actions to improve both the levels of client satisfaction and perceptions of specific service-quality support elements can also be enhanced by completing a separate analysis for different user groups. We have found that support factors can be shown to impact satisfaction for only a subset of the user population, and that segmenting the population into different user groups is crucial for understanding the true picture.6,7 More specifically, we have found that different user groups in an organization will have different rankings of importance for support factors, different ratings of the performance of the IT support department, and different factors that show larger or smaller service-quality gaps. Users can be segmented in many different ways: along functional lines, geographical locations, demographics, or whatever way makes sense for the organization. In the organizations we studied, multiple user groups (segmented along functional lines) showed significant differences in terms of what support was important to them, in how they assessed the service quality of the IS department, and in which support items influenced overall satisfaction. For example, while several support items were important to all groups (e.g., fast response time from IS staff, system response time), some support items were important to only one group (e.g., data security and privacy or degree of personal control). Another interesting finding came from comparing the importance rankings assigned by the IS department with those assigned by the user population as a whole. For example, user training was rated as more important to the IS staff, while personal control and ease of access were more important to the user population. This difference is important because our experience has shown that the IS department will frequently provide a higher level of service for items that are more important to them rather than for those items that are more important to the users. Identifying those services that are not highly valued by users is another useful way to determine which service improvement efforts will make the most difference in client assessments. In addition to finding differences in importance and performance ratings among user groups, we found differences in service-quality gaps among 756

Improving Satisfaction with End-User Support user groups. An examination of the negative gaps (implying dissatisfaction) for each user group showed that each group had a different subset of items with which they were dissatisfied. While fast response time from the IS staff showed the highest negative gap for one group, a positive attitude of the IS staff toward users showed the highest negative gap for another group. Segmentation of the user groups identifies the potential benefits derived from providing targeted support to specific groups. Increasing support for items with large service-quality gaps for the entire user population will miss those subsets of users who have different priorities and different needs. Our results confirm earlier research that showed a one-size-fits-all strategy to end-user support and control may not be the most effective IS management approach in practice.8 COMBINING SATISFACTION AND SERVICE-QUALITY ANALYSIS TO CREATE ACTION PLANS While these separate measures (end-user satisfaction and service-quality analysis) are valuable tools in their own right, combining these measurements offers additional insights that can be highly beneficial to IS managers interested in providing effective IT support.6,7 Nevertheless, understanding the relationship between end-user support factors and user satisfaction is subtle and multifaceted. Service-quality gap analysis, determined by the gap between importance and performance, is a good diagnostic tool to highlight specific factors that require improvement. However, focusing on factors with large gaps may not necessarily have the largest impact on user satisfaction. On the other hand, targeting factors with the most impact on satisfaction levels, but which are already adequately supported, may be a waste of resources. Correlating service-quality gaps for specific support factors with user satisfaction provides a richer picture. Managers can then direct their efforts to factors that not only have a significant impact on user satisfaction, but also have a larger service-quality gap. One IS department’s experience in creating an action plan aimed at influencing end-user attitudes provides a useful example.6 After reviewing the results of both types of surveys, the IS support group instituted a series of management interventions specifically aimed at increasing user satisfaction and improving service quality. Some examples of their interventions included providing additional full-time help desk staff, extending help desk hours, and creating a help desk Web site. Some of their interventions were targeted at decreasing the service-quality gaps for several support factors, including “technical competence of the IS staff” and “staff response time.” In a follow-up assessment, it was found that seven of the eight targeted support factors showed improvement. Further, in this situation, the technical competence of the support staff was a highly salient issue to one user group, and less so to another user group, and by visibly addressing this 757

FACILITATING KNOWLEDGE WORK issue there was a significant change in satisfaction levels for the user group to whom this issue was most important. CONCLUSION While analyzing the levels of satisfaction and the importance-performance gaps for IT services is not an exact science, research has provided some tools that may prove helpful. Survey instruments related to levels of satisfaction can be used to assess different dimensions of satisfaction. SERVQUAL instruments can be used to better understand the priorities of end users regarding services and the perceptions of how the IS department is currently performing. Analysis using both satisfaction and SERVQUAL measures can also identify which service elements have the most influence on satisfaction. Running separate analyses for different user groups can further show distinctions in the priorities of each group. The combined effect of using these various survey approaches is a reasonably detailed picture of users’ attitudes regarding IT service provision. Service improvement efforts can then be targeted toward the support factors that impact the most user groups or toward high-priority user groups. Failing to evaluate user group differences can lead to missed opportunities when allocating scarce end-user support resources. Considering the relationship between end-user support factors and user satisfaction from various perspectives can significantly enhance the richness of understanding users’ priorities relative to IT support. This richer perspective can in turn lead to more targeted, and potentially cost-effective, interventions as part of a continuous improvement of IT service delivery. References 1. Mirani, R. and King, W., “The Development of a Measure for End-User Computing Support,” Decision Sciences, 25(4), 481–499, 1994. 2. Parasuraman, A., Zeithaml, V.A., and Berry, L., "More on Improving the Measurement of Service Quality," Journal of Retailing, 69(1), 140–147, 1993. 3. Pitt, L., Watson, R., and Kavan, C., “Service Quality: A Measure of Information Systems Effectiveness,” MIS Quarterly, 19(2), 173–187, 1995. 4. For a complete discussion of the SERVQUAL instrument and its use in the IS environment, see Kohlmeyer, J. and Blanton, J., “Improving IS Quality,” Journal of Information Technology Theory and Application, 2(1), 2000. http://imz007.ust.hk, accessed October 24, 2002. 5. For a complete discussion of the SERVQUAL instrument and its use in the IS environment, see Jiang, J., Klein, G., and Carr, C., “Measuring Information Systems Service Quality: SERVQUAL from the Other Side,” MIS Quarterly, 26(2), 145–166, 2002. 6. Shaw, N., Delone, W., and Niederman, F., “An Empirical Study of Success Factors in EndUser Support,” The DATA BASE for Advances in Information Systems, 33(2), 41–56, Spring, 2002. 7. Shaw, N., Lee-Partridge, J.-E., and Ang, J., “Understanding End-User Computing through the Use of Technological Frames,” Journal of End-User Computing, in press, 2003. 8. Speier, C. and Brown, C., “Differences in End-User Computing Support and Control across User Departments,” Information & Management, 32(2), 85–99, 1997.


Improving Satisfaction with End-User Support Further Reading Ives, B., Olson, M., and Baroudi, J. “The Measurement of User Information Satisfaction,” Communications of the ACM, 26(10), 785–793, 1983. Remenyi, D.S., Money, A.H., and Twite, A., A Guide to Measuring and Managing IT Benefits, NCC Blackwell Limited, Oxford, England, 1991.


Chapter 60

Internet Acceptable Usage Policies
James E. Gaskin

The job of an acceptable use policy is to explain what the organization considers acceptable Internet and computer use and to protect both employees and the organization from the ramifications of illegal actions. This chapter describes how such policies are written, what they should cover, and how they are most effectively activated. Now that an Internet connection is a requirement, IS executives are responsible for more work than ever before. One vital area on the to-do list is to write, update, or implement the company’s Acceptable Usage Policy for Internet use. The company may refer to this as an Internet Use Policy, the networking portion of the Computer Use Policy, or the Internet addition to your Personnel Manual. No matter the name, the role of such a policy is the same: it lists the rules and standards the company believes are important for employees using computers, networks, and particularly the Internet. Although the Acceptable Use Policy can be incorporated into other existing documents, it is generally taken more seriously and provides more company protection if it is a separate document. Why is the Acceptable Use Policy so important today? Legal liability for Internet actions can quickly shift from the employee to the employer. After all, if the Internet is filled with obscenity and other illegal temptations, the company should provide protections for the employees. If management knowingly allows access to inappropriate Internet sites without either warning the users or blocking that access, management climbs on the liability hook with the actual employee performing illegal actions. WRITING AN ACCEPTABLE USE POLICY You, or someone in your department, must write the Acceptable Use Policy. It is better to have the fewest number of people possible involved in writing the Acceptable Use Policy. The very best number of authors is one. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


FACILITATING KNOWLEDGE WORK This may grate against corporate culture, where technical documents often see more hands than a public washroom sink. While there are few excuses for the amount of tampering and changing that goes on with technical documents, the Acceptable Use Policy goes beyond a product manual or marketing white paper. You and your management must consider the Acceptable Use Policy as a legal document that binds the behavior of employees within certain boundaries explained within the document. Limiting the number of authors limits the number of viewpoints within the Acceptable Use Policy. Your employees must have no doubt why they have been given the Acceptable Use Policy, what their responsibilities are in regard to Internet and computer use, and what the penalties are for misuse of company resources, including time. More authors, or up-the-line editorial changes, will muddy the Acceptable Use Policy. Internal contradictions within the Acceptable Use Policy will leave loopholes for employee lawyers to exploit. After the Acceptable Use Policy is written, the committee to oversee employee compliance with the terms of the agreement should be created. The committee should meet and approve the Acceptable Use Policy before distributing the document. This is the time for any comments, suggestions, additions, or deletions to the Acceptable Use Policy. While all on the committee are welcome to offer changes to the document, only the author should implement those changes. Again, the consistency of viewpoint is important. Legal review comes after the committee has approved the Acceptable Use Policy contents and related documents. This brings us to a philosophical decision: lawyers want long, complicated documents that spell out every possible infraction and associated punishment, while business managers want short documents that can be interpreted in the company’s favor. Your decision on the Acceptable Use Policy length and completeness will reflect your corporate culture and the wishes of upper management. Your Acceptable Use Policy will be considered part of the Employee Handbook. Some states regard these handbooks as a legal contract, and others do not. Your corporate counsel will be able to answer that question for the states where your company has operations. If it matters, be aware that the number of employees who read your Acceptable Use Policy approaches zero as the document lengthens. Simply put, the longer the document, the fewer readers. In most states, employees are bound by the conditions of the Acceptable Use Policy regardless of whether or not they have read and signed the document. However, holding employees liable for a document they have not read will be seen as a cold, heartless corporate maneuver. Employees who feel betrayed contact lawyers far more often than those who feel they were treated fairly. Although 762

it is legal in some states for companies to ignore the promises they make in Employee Handbooks, the antagonism generated within the employee ranks by that mode of operation guarantees more lawsuits than following your own written guidelines.

SCOPE AND OVERVIEW OF THE POLICY
Does your company already have policies concerning computer use? How about company telephone, fax, and U.S. mail use? Is there a security policy in place? Some companies, remiss in providing policies in the past, try to cram everything into the Acceptable Use Policy. This is legal, but confusing to the employees. Your Acceptable Use Policy will be more valuable if targeted strictly to Internet and other computer-networking concerns.

E-MAIL
Because e-mail is the most popular Internet application, e-mail control is important. The good part of e-mail is that there is a strong analogy to something all users are familiar with, namely, physical mail. One company includes the following excellent statement:

Remember that e-mail sent from the company travels on the company's electronic stationery. Your e-mail appears to the recipient as if it were sent on company letterhead.

Your security policy, if separate, should cover information about e-mail accounts, such as forging identities (not good). If it does not, or you wish to put all e-mail information in your Acceptable Use Policy, feel free. You can easily make the argument that e-mail information belongs in your Internet usage document. Here are a few more things different schools and companies warn clients about e-mail use:
• Sending harassing, obscene, or other threatening e-mail is illegal.
• Sending junk mail, for-profit messages, or chain letters is prohibited.
• Take all precautions against importation of computer viruses.
• Do not send or receive sexually oriented messages or images.
• Do not transmit confidential company information.
• Employee medical, personal, or financial information must never be divulged.
• Personal messages are prohibited (or limited, or freely allowed, depending on your policy).
The other important point users should be told is that you will, definitely, read e-mail messages at times. Whether or not an employee must be

FACILITATING KNOWLEDGE WORK told when the company monitors communications is advisable according to some lawyers, but not others. Either way, if every employee signs the Acceptable Use Policy saying they will be monitored on a random basis, there will be little wiggle room if they complain later. They will also pay more attention to following the rules when they know someone will be monitoring their messages. Let me add one more bullet for the list two paragraphs earlier: • Your e-mail messages will be kept and periodically reviewed before being deleted. This should leave no doubt that messages from your users will be reviewed. Your user should have no expectation that their e-mail messages are private and protected by any type of privacy law. Make sure each user understands that some messages will be read, even if messages are only spot-checked. Employees must understand that every message they send or receive may be read by management. Do not keep e-mail messages for longer than 90 days, if that long. Why? Lawyers are now routinely demanding e-mail archives during lawsuit discovery. If your company is sued for any reason, the opposing lawyers will try to read all internal and external e-mail messages for the time period in question. No e-mail archives means no embarrassing quotes and off-thecuff remarks that will cost you in court. Some large companies refuse to back up e-mail files for this reason. WORLD WIDE WEB RESOURCES AND NEWSGROUPS The Web takes the brunt of criticism when the Internet is blasted as a giant productivity sink hole. Corporate managers rank employee time wasted as their number-two concern about Internet access, right behind security. Your management will also start wondering how many employees are frittering away hours at a time perusing the Web on company time using company equipment. Newsgroups have somewhat the same reputation, since there are over 20,000 newsgroups, only a few of which pertain to your business. While newsgroups full of equivalent professionals in other companies provide great benefit to your company employees, the nontechnical press focuses on the “alt.sex.*” hierarchy of newsgroups. Someone in your management will be determined to limit access to all newsgroups, just to keep the alt.sex.* groups out of the company. Do not lie to management or employees in your Acceptable Use Policy. Yes, there are inappropriate Web servers and newsgroups. Yes, some Web servers and newsgroups are valuable. Yes, you can monitor and track each 764

Internet Acceptable Usage Policies user of any network resource by name, date, time online, and amount of material downloaded from any inappropriate network source. In other words, you can log the actions of each and every corporate user during each and every network communication. If you do not have the proper firewall or proxy server in place yet to monitor your users, get one. You can, however, get one after the Internet connection is available. Better late than never. After all, your employees will be told what the company considers inappropriate in the Acceptable Use Policy. Realize that some time will be wasted on the Web, just as time is wasted reading through trade magazines looking for articles that apply to your company. Every profession has trade magazines that offer articles and information in exchange for presenting advertising to the reader. The Web, to some people, is becoming nothing more than a huge trade magazine, offering helpful information interspersed with advertising. In a sense, the Web is not new, it is just advertising delivered by computer rather than by magazine. Treat it similarly. As some employees research information more than others, they will use their Web client more than others. Information-dependent employees will surf quite a bit; clerks and production employees should not. You may mention your guidelines for the Web in your Acceptable Use Policy, or you may prefer to ignore the Web. Some sample restrictions may include • Viewing, downloading, displaying, or distributing obscene images is illegal. • While the Web encourages wandering, remember your focus during work hours remains business. The first bullet point is not optional — remind your employees regularly that obscenity in the workplace will not be allowed. The second bullet point is optional, and should be modified to match your comfort level regarding employee use of the Web. Let’s see some of the restrictions other Acceptable Use Policies have listed for newsgroup activity, plus a few I have added: • Downloading or uploading non-business images or files is prohibited, and possibly illegal. • Sending harassing, obscene, or other threatening posts is illegal. • Sending junk posts or “for-profit” messages is prohibited. • Post articles only to groups supporting that subject matter. • Do not post company advertisements of any kind in any newsgroup. 765

FACILITATING KNOWLEDGE WORK • Posting messages without your real name attached is prohibited. • Copying newsgroup information to any other forum is illegal (copyright infringement). Newsgroups are where the majority of defamation happens; flame wars encourage angry responses rather than clear thinking. Often, other readers of the newsgroup will send copies of messages to the postmasters of the flame war participants. Whether the messages indicate a flame war that is getting out of hand or just unprofessional statements, it is best to visit your involved employee and counsel restraint. If kind words do not settle your employee, unplug them from the newsgroup access list. No sense risking a lawsuit when you know there is a good chance of things being said that have no positive value to your company. Several Acceptable Use Policies address defamation somewhat obliquely. Here are some examples of the language included in those policies: • …including comments based on race, national origin, sex, sexual orientation, age, disability, religion, or political beliefs. • …inappropriate uses … to send/receive messages that are racist, inflammatory, sexist, or contain obscenities. Whether these are politically correct or good business sense depends on the individual company. However, reading “you can’t understand, because you’re a [blank]” in a global forum such as an Internet newsgroup will not endear anyone to the employee making that statement. Your company will suffer loss of customer good will at the least, and may be sued for defamation. These same courtesy restrictions apply to e-mail, but e-mail lacks that extra edge brought when thousands of readers see your company name attached to the ranting of one overwrought employee. IRC (Internet Relay Chat) and MUDs (Multi-User Domains) have not been mentioned because they have no redeeming professional use. No employee use of such activity should be tolerated. In case employees are confused about whether or not the company’s rights to monitor employee activity extend to the computers, include a line such as this: • All computer communications are logged and randomly reviewed to verify appropriate use. Notice the words are “appropriate use.” If your Acceptable Use Policy says the words “indecent images,” your employees (and their lawyers) will argue about that wording. “Indecent” is in the eye of the beholder. “Obscene,” however, is a legal term that applies just as well to computers as to maga766

Internet Acceptable Usage Policies zines, books, and videos. Better to stick with “inappropriate” if possible, because that covers more activities than any other term. Penalty for misuse should range up to and include termination. If an employee must be terminated, do so for work-related causes, rather than mention the word Internet. Free speech advocates get involved when an employee is fired for inappropriate use of the Internet, but not when an employee is terminated for wasting too much time on the job and disobeying orders. NETIQUETTE ADDENDUM Some companies spell out appropriate e-mail, newsgroup, and Web communication guidelines within their Acceptable Use Policy. This is a noble endeavor, but slightly misguided. Your company guidelines toward Internet communications are likely to change more often than your restrictions on inappropriate Internet use and discipline for infractions. Since the Acceptable Use Policy should be signed by each employee if possible, any changes to netiquette embedded in the Acceptable Use Policy will require a new signature. The logistics of this process quickly become overwhelming. Put the rules of Internet behavior in a separate Netiquette Addendum, attached to the Acceptable Use Policy. In this way, changes to e-mail rules, for instance, will not negate the Acceptable Use Policy in any way, nor will anyone believe a new signature is necessary. ACTIVATING THE POLICY WITH OR WITHOUT SIGNATURES As briefly mentioned in the preceding section, getting signatures on the Acceptable Use Policy can be tricky. Small- to medium-sized companies can handle the logistics of gathering signed copies of the Acceptable Use Policy, although there will still be considerable amount of time expended on that effort. Large companies may find it impossible to ship paper policies all over the world for signatures and get them back signed, no matter how much time and effort they devote. The best case is to get a signed Acceptable Use Policy from each employee before that person is connected to the Internet. Training classes offer an excellent chance to gather signatures. If software must be installed on client computers, the Acceptable Use Policy should be presented, explained, and signed during software loading. Reality intrudes, however, and ruins our best case. Many companies already have granted Internet access before developing their Acceptable Use Policy. This is not the wisest course, but it is common. Other companies do not offer training or cannot physically gather signed copies. 767

FACILITATING KNOWLEDGE WORK It is important to send copies of the Acceptable Use Policy to each employee with Internet access. Copies should also be posted in public places, such as break rooms and department bulletin boards. Add the policy to the existing Personnel Manual or Employee Handbook. Send an email to users every quarter reminding them of the Acceptable Use Policy and where they can read a copy if they have misplaced theirs. Public attempts will blunt any disgruntled employee contentions they did not know about Internet restrictions. THE ACCEPTABLE USE POLICY COMMITTEE The Acceptable Use Policy Committee should be carefully formed. Department managers should participate in the selection process for employees in their group, so as not to ruffle feathers or step into the middle of some other disagreement. Give each member plenty of warning before the first meeting and provide background information quickly. Who should be included? The following list contains the requisite positions and their expected contribution: • Computer systems manager: technical details of Internet access and monitoring. • Company lawyer or Human Resources official: legal aspects of workplace rules. • Executive management representative: guarantees your committee will not be ignored. • Union representative: laws for union workers vary from those covering other employees. • The “One Who Knows All,” or a general power user: provides employee concerns and input. What is the committee responsible for, and to whom? Everything concerning the Internet, and everyone. How often should the committee meet? At the beginning, every two weeks. Once the Internet connection is old news, once a month may be enough. The interval is dictated by the number of security incidents and employee discipline actions to be resolved. In extreme cases, such as an employee action that could result in company liability or criminal prosecution for someone, the committee must meet immediately. The grievance policy in cases of Internet abuse should be clear and well known to all employees who care to ask. It is important that all employees know who sits on the Acceptable Use Policy committee. Secret committees are repressive, but open committees can encourage goodwill within the company. Strongly consider setting up 768

Internet Acceptable Usage Policies an internal e-mail address for your committee, and use it for questions and as an electronic suggestion box. The most effective deterrent to misdeed is not the severity of discipline but the inevitability of discovery. Remember, your goal is to make the Internet serve the company, not to find excuses to discipline or fire employees. After the first committee meeting, the following questions should be answered: • Will employees be fired for Internet misuse? • What is the penalty for the first offense? The third? The fifth? • Will the police be called for stolen software or obviously obscene images? • Where must your other employee policies be modified to support your Internet connection? • Are any insurance policies in place to protect against hackers or employee misdeed? Should some be added? • How often will employees be reminded of company Internet guidelines? How will this be done? Discipline is particularly tough when discussing the Internet. After all, if an employee is wasting hours per day on the Internet, the department manager should be disciplined for improper management. Waste of time on the Internet is not a technology issue, but a management issue. Although the department manager should be disciplined, that same manager should be the one to discipline the employee. Outsiders with an executive mandate to punish miscreants are never popular and often are sabotaged by the very employees they should oversee. Keep the department managers in the loop as long as possible. Exceptions to this approach include security violations and illegal acts. In those cases, the department manager must be informed, but company security or the local police will handle the situation. These cases are never pleasant, but do not be naíve. If you believe none of your employees could act illegally, you must be new to management. CONCLUSION The job of the Acceptable Use Policy is to explain what the company considers acceptable Internet and computer use and behavior. The committee dedicated to enforcing the provisions of the policy must publicize the Acceptable Use Policy and monitor employee compliance. Infractions must be handled quickly, or the employees will assume nothing in the Acceptable Use Policy is really important, and compliance levels will shrink. Proactive Internet management will drastically lower the chances of Internet-related lawsuits, arguments, and misunderstandings. 769


Chapter 61

Managing Risks in User Computing
Sandra D. Allen-Senft
Frederick Gallegos

User computing and application development have taken on greater importance in organizations because of the perception that users can develop applications faster and cheaper than the information systems (IS) group. Many of these applications have become critical to sales, operations, and decision making. However, there are many risks associated with user computing that need to be managed to protect resources and ensure information integrity. What is needed is a commitment to manage user computing to take advantage of speed and flexibility while controlling risks.

SPECIFIC RISKS IN USER COMPUTING
Because PCs seem relatively simple and are perceived as personal productivity tools, their effect on an organization has largely been ignored. In many organizations, user computing has limited or no formal procedures. The control or review of reports produced by user computing is either limited or nonexistent. The associated risk is that management may be relying on user-developed reports and information to the same degree as those developed under traditional centralized IS controls. Management should consider the levels of risk associated with user applications and establish appropriate controls. Risks associated with user computing include:
• Weak security
• Inefficient use of resources
• Inadequate training
• Inadequate support
• Incompatible systems
• Redundant systems
• Ineffective implementations
• Copyright violations
• The destruction of information by computer viruses
• Unauthorized access or changes to data and/or programs
• Unauthorized remote access
• Reliance on inaccurate information
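Several of the items above (inconsistent data, data timing problems, and reliance on inaccurate information) can be caught early by a simple automated reconciliation between user-developed reports and the system of record; a minimal, hypothetical Python sketch of such a control follows. The file names, column names, and tolerance are placeholders, not references to any particular system.

# Illustrative control: reconcile totals from a user-developed report
# against the corporate general ledger extract. All names are hypothetical.
import csv

TOLERANCE = 0.01   # acceptable rounding difference, in dollars

def total_by_account(path, account_col, amount_col):
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            acct = row[account_col]
            totals[acct] = totals.get(acct, 0.0) + float(row[amount_col])
    return totals

ledger = total_by_account("general_ledger.csv", "account", "amount")
user_report = total_by_account("user_report.csv", "account", "amount")

for acct in sorted(set(ledger) | set(user_report)):
    diff = user_report.get(acct, 0.0) - ledger.get(acct, 0.0)
    if abs(diff) > TOLERANCE:
        print(f"Account {acct}: user report differs from ledger by {diff:,.2f}")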

Each of these risks is discussed in the remainder of this chapter. Weak Security Information systems security should be a concern of IS, users, and management. However, security, for many companies, is not a top priority. In a 1997 survey conducted by Infosecurity News, respondents indicated that information security had improved but cited significant obstacles to reducing security risks. The most significant obstacles cited include lack of funds, lack of employee training, lack of user awareness, technical complexity, unclear responsibilities, lack of senior-management awareness, lack of senior-management support, and lack of good security tools. Today, however, there is a heightened overall awareness. The primary concerns in security involve educating management and users on the exposures to loss through the use of technology. Users’ main focus is getting the work done, and management’s primary focus is on the bottom line. The auditor’s responsibility is to inform management and users on how security can enhance job performance and protect the bottom line. Inefficient Use of Resources User development may at first appear to be relatively inexpensive compared with traditional IS development. However, a number of hidden costs are associated with user computing that organizations should consider. In addition to operation costs, costs may increase due to a lack of training and technical support. Lack of user training and their inexperience may also result in the purchase of inappropriate hardware and the implementation of software solutions that are incompatible with the organization’s systems architecture. Users may also increase organization costs by creating inefficient or redundant applications. For example, we see redundant data, inconsistent data, conflicting naming conventions, and data timing issues in these type of applications. In one financial company, user-developed systems were causing a problem with the general ledger system due to the timing of the transfer of transactions. Data was transferred late, causing end-of-the-month reports to be inaccurately stated. Managers who met to review reports of the prior month’s activity noticed a shortfall of $50,000 in some accounts. In another situation, auditors reviewed revenue reports from a department and noted some inaccuracies and questioned managers about the 772

Managing Risks in User Computing discrepancies. Discrepancies were traced to a user-developed support system that had been integrated into the overall reporting process. The manager stated that information provided by the centralized support system did not meet their need and their new system “was more” accurate. When the auditors checked the user-developed system, they found some questionable manipulation of the information not in accordance with the company’s reporting policy. Inadequate Training Organizations may decide not to invest in training by looking at the upfront costs alone. According to one study by the Gartner Group and a recent study by the U.S. National Institute of Standards and Technology, the cost of not training will far exceed the investment organizations will make to train both users and IS professionals in new technologies. One reason for this paradox is that users who are forced to learn on their own take as much as six times longer to become productive with the software product. Self-training is also inefficient from the standpoint that users tend to ask their colleagues for help, which results in the loss of more than one individual’s time, and they may also be learning inappropriate or inefficient techniques. Both studies also showed that an effective training program reduces support costs by a factor of three to six, because users make fewer mistakes and have fewer questions. Inadequate Support The increasing complexity of technical environments and more sophisticated PC tools has fueled the increased demand for user support. Because traditional IS departments do not have the staffing or the PC knowledge to help user departments, users have turned to “underground support” (i.e., support by peers or department-purchased outsourcing) to fill the gap. The previously mentioned studies found that the need for support is inelastic, and the gap between needed support and formal support is filled by underground support. This underground support accounts for as much as 30 percent of user computing costs). Users need “focal points” as “local” as possible for assistance. A focal point is a functional support person. Many times, the functional support person is an accomplished user. However, without a central support organization, there may be limited coordination between user departments to ensure that procedures are consistent and that applications are compatible. Incompatible Systems User designed applications that are developed in isolation may not be compatible with existing or future organizational information technology architectures. Traditional IS systems development verifies compatibility with 773

FACILITATING KNOWLEDGE WORK existing hardware and related software applications. Hardware and software standards can help ensure the ability to share data with other applications in the organization. As mentioned earlier, inconsistent data, conflicting naming conventions, and data timing problems are very common types of problems that occur in user-developed systems. Inconsistent data often appears in the conflict between a user-generated report and a Corporate Report. Data used to create reports may be from different sources than the Corporate Reports, thus these “unofficial” reports are used for decision making that may conflict with Corporate interpretation. Redundant Systems In addition to developing incompatible systems, users may be developing redundant applications or databases because of the lack of communication between departments. Because of this lack of communication, user departments may create a new database or application that another department has already created. A more efficient implementation process has user departments coordinating their systems application development projects with IS and meeting with other user departments to discuss their proposed projects. Redundant data can cause increased storage costs and require more processing complexity in resolving the redundancies. For example, mapping software was used to analyze a user-developed system in two different departments. These systems were 90 percent redundant. Discussion with staff indicated that the second system evolved because they wanted some changes made to the structure of the report but the other department did not have the “resources” to customize it. Ineffective Implementations Users typically use fourth-generation languages, such as database or Internet Web development tools, to develop applications. In these cases, the user is usually self-taught. And, they lack formal training in structured applications development, do not realize the importance of documentation, and omit necessary control measures that are required for effective implementations. In addition, there is no segregation of duties, because one person acts as the user, systems analyst, developer, and tester. With sufficient analysis, documentation, and testing, user-developed systems will better meet management’s expectations. The Absence of Segregation of Duties. Traditional systems application development is separated by function, tested, and completed by trained experts in each area. In many user development projects, one individual is responsible for all phases, such as analyzing, designing, constructing, test774

Managing Risks in User Computing ing, and implementing, of the development life cycle. There are inherent risks in having the same person create and test a program, because they may overlook their own errors. It is more likely that an independent review will catch errors made by the user developer, and such a review helps to ensure the integrity of the newly designed system. Incomplete System Analysis. Many of the steps established by central IS departments are eliminated by user departments. For example, the analysis phase of development may be incomplete, and all facets of a problem may not be appropriately identified. In addition, with incomplete specifications, the completed system may not solve the business problem. Users must define their objectives for a particular application before they decide to purchase existing software, to have IS develop the application, or to develop the application themselves. Insufficient Documentation. Users typically focus on solving a business need and may not recognize the importance of documentation. Any program that is used by multiple users or has long-term benefits must be documented, particularly if the original developer is no longer available. Documentation also assists the developer in solving problems or making changes to the application in the future, in addition to facilitating testing and familiarizing new users to the system. Inadequate Testing. Independent testing is important to identify design flaws that may have been overlooked by the developer of a system. Often, the individual who creates the design will be the only one testing the program so that he is only confirming that the system performs exactly as he designed it. The user should develop acceptance criteria that can be used in testing the development effort. Acceptance criteria help to ensure that the users’ system requirements are validated during testing. For example, the National Institute of Standards and Technology has created a forum of developers and users to exchange testing and acceptance criteria on new IS security products.
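Acceptance criteria are easiest to enforce when they are written down as concrete, repeatable checks. The short sketch below is illustrative only; the commission calculation, figures, and function names are hypothetical. It shows how a few agreed criteria for a user-developed calculation can be captured as automated assertions that an independent reviewer can rerun at any time.

# Illustrative only: a hypothetical user-developed commission calculation
# and a few acceptance criteria written as plain assertions.

def commission(sales_amount: float, rate: float = 0.05, cap: float = 10_000.0) -> float:
    """Return commission as rate * sales, never negative and never above the cap."""
    if sales_amount <= 0:
        return 0.0
    return min(sales_amount * rate, cap)

def run_acceptance_tests() -> None:
    # Criterion 1: the standard case matches the figure agreed with the business owner.
    assert commission(100_000.0) == 5_000.0
    # Criterion 2: commissions are capped at the agreed maximum.
    assert commission(1_000_000.0) == 10_000.0
    # Criterion 3: negative or zero sales never produce a payout.
    assert commission(-500.0) == 0.0
    print("All acceptance criteria passed.")

if __name__ == "__main__":
    run_acceptance_tests()

Even this minimal form of scripted acceptance testing gives the reviewer something other than the developer's own confirmation that the application behaves as designed.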

Copyright Violations Software programs can easily be copied or installed on multiple computers. Organizations are responsible for controlling the computing environment to prevent software piracy and copyright violations. The Copyright Act of 1976 makes it illegal to copy computer programs except for backup or archival purposes. Any business or individual convicted of illegally copying software is liable for both compensatory and statutory damages of up to $100,000 for each illegal copy of software found on the premises. Software piracy is also a federal crime that carries penalties of up to five years in jail. The Software Publishers Association (SPA) was established in 1988 to promote, protect, and inform the software industry 775

FACILITATING KNOWLEDGE WORK regarding copyright issues. Ten years later, the SPA represented 1200 members with 85 percent of the PC software market share. The SPA receives information from disgruntled employees and consultants about organizations that use illegal software. An organization faces a number of additional risks when they tolerate software piracy. Copied software may be unreliable and carry viruses. Litigation involving copyright violations are highly publicized, and the organization is at risk of losing potential goodwill. Furthermore, tolerating software piracy encourages deterioration in business ethics that can seep into other areas of the organization. The key to controlling the use of illegal software rests with the user. Organizations should inform users of the copyright laws and the potential damages that result from violations of those laws. When users are given access to a personal or desktop computer, they should sign an acknowledgment that lists the installed software, the individual’s responsibilities, and any disciplinary action for violations. In addition, written procedures should detail responsibility for maintaining a software inventory, auditing compliance, and removing unlicensed software. The Destruction of Information by Computer Viruses Most users are knowledgeable about virus attacks, but the effect of a virus remains only a threat until they actually experience a loss. A virus is the common term used to describe self-reproducing programs (SRP), worms, moles, holes, Trojan horses, and time bombs. In today’s environment, the threat is great because of the unlimited number of sources from which a virus can be introduced. For example, viruses can be copied from a diskette in a floppy drive or downloaded from a remote connection through a modem. A virus is a piece of program code that contains self-reproducing logic, which piggybacks onto other programs and cannot survive by itself. A worm is an independent program code that replicates itself and eats away at data, uses up memory, and slows down processing. A mole enters a system through a software application and enables the user to break the normal processing and exit the program to the operating system without logging off the user, which gives the creator access to the entire system. A hole is a weakness built into a program or system that allows programmers to enter through a “backdoor,” bypassing any security controls. A Trojan horse is a piece of code inside a program that causes damage by destroying data or obtaining information. A time bomb is code that is activated by a certain event, such as a date or command. Viruses can also be spread over telephone lines or cables connecting computers in a network. For example, viruses can spread when infected files or programs are downloaded from a public computer bulletin board. 776

Viruses can cause a variety of problems:

• Destroy or alter data
• Destroy hardware
• Display unwanted messages
• Cause keyboards to lock (i.e., become inactive)
• Slow down a network by performing many tasks that are really just a continuous loop with no end or resolution
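One inexpensive detective control against several of these problems, and against the unauthorized program changes discussed later in this chapter, is a periodic file-integrity check. The following sketch is illustrative only; the directory and baseline file names are hypothetical assumptions. It records a cryptographic hash of each file in a shared application folder and reports anything that is new, changed, or missing on later runs.

# Illustrative sketch only: baseline file-integrity check for a shared folder.
# The folder and baseline file names below are hypothetical examples.
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("app_baseline.json")   # hypothetical baseline location
WATCHED_DIR = Path("shared_apps")           # hypothetical folder of user applications

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline() -> dict:
    """Record a hash for every file in the watched directory."""
    return {str(p): file_hash(p) for p in WATCHED_DIR.rglob("*") if p.is_file()}

def check_against_baseline() -> list:
    """Report files that are new, changed, or missing since the baseline was taken."""
    baseline = json.loads(BASELINE_FILE.read_text())
    current = build_baseline()
    findings = []
    for name, digest in current.items():
        if name not in baseline:
            findings.append(f"NEW FILE: {name}")
        elif baseline[name] != digest:
            findings.append(f"CHANGED: {name}")
    for name in baseline:
        if name not in current:
            findings.append(f"MISSING: {name}")
    return findings

if __name__ == "__main__":
    if not BASELINE_FILE.exists():
        BASELINE_FILE.write_text(json.dumps(build_baseline(), indent=2))
        print("Baseline created.")
    else:
        for finding in check_against_baseline():
            print(finding)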

A virus can consume processing power and disk space by replicating itself multiple times. The risk to organizations is the time involved in removing the virus, rebuilding the affected systems, and reconstructing the data. Organizations should also be concerned with sending virusinfected programs to other organizations. Viruses cause significant financial damage as well as staff time to clean up, and recipients may file lawsuits against the instituting organization. Unauthorized Access or Changes to Data or Programs Access controls provide the first line of defense against unauthorized users who gain entrance to a system’s programs and data. The use of access controls, such as user IDs and passwords, are typically weak in user-controlled systems. In some cases, user IDs and passwords may be shared or easily determined. This oversight can subject applications to accidental or deliberate changes or deletions that threaten the reliability of the information generated. Programs require additional protection to prevent unexpected changes. To prevent accidental changes, users should be limited to execute only. Unauthorized Remote Access More and more users are demanding remote access to LAN services. The easiest method to provide security is to eliminate modem access completely. With weak access controls, a modem allows virtually anyone access to an organization’s resources. To protect against unauthorized access, remote dial-up access should have a callback feature that identifies the user with a specific location. A more sophisticated solution is to have key cards with encrypted IDs installed on the remote terminal and a frontend server on the host. At a minimum, user IDs and passwords should be encrypted when transmitted over public lines. In addition, confidential data that is transmitted over public lines should be encrypted. The security solution depends on the sensitivity of the data being transmitted. Reliance on Inaccurate Information Accurate information is an issue, whether the user is accessing a database on the mainframe or a departmental database on a PC. Users may be asked to generate a report without fully understanding the underlying informa777

FACILITATING KNOWLEDGE WORK tion, or they may not be sufficiently trained in the reporting application to ask the appropriate questions. Additional complications occur when users download information from the mainframe for analysis and reporting. Departmental databases may have redundant information with different timeframes. The result is wasted time in reconciling two databases to determine which data is accurate. USER COMPUTING CONTROLS Strategy Written strategy helps guide users in implementing technology solutions that satisfy corporate objectives. A strategy document should include the future direction of the organization, how technology will be used, how technology will be managed, and what role IS and users will fill. A high-level strategy guides in the acquisition, allocation, and management of technology resources to fulfill the organization’s objectives. Standards Standards guide users in selecting hardware, software, and developing new applications. Hardware and software standards ensure compatibility between user groups and ease the burden of technology integration and technical support. Application development standards help ensure that user requirements are adequately defined, controls are built-in, testing is thorough, users are trained, systems are adequately documented, and changes are adequately controlled. Policies and Procedures A policy statement should communicate the organization’s stand on such issues as systems architecture, testing and validation of requirements/systems, and documentation. The areas are critical to establishing an institutional process for managing user-developed applications. These Policies and Procedures should be developed by IT management and EUC groups to provide direction and governance over this area. Systems Architecture is the foundation of any information system. Systems proposed should be upward compatible with the evolving architecture and within the strategic vision of the organization. Testing and validation of requirements/systems provide a structured process for systematically testing and validating the user-developed system to assure it meets corporate and user requirements. Finally, documentation provides the means to capture the requirements and eventually improve corporate systems to meet the users’ needs. 778

Managing Risks in User Computing Other major areas are unlicensed software, information privacy and security, virus prevention, and backup/recovery. The policy statement on unlicensed software should include the removal of unlicensed software and disciplinary action. Information privacy and security should include data classification, encryption policy, and procedures for users and remote access. Virus prevention should include standard virus protection software, regular virus definition updates, and checking downloaded files and shared floppy disks. Backup and recovery procedures should define responsibility for data on all platforms (e.g., LAN, desktop). Once policy and procedures are completed, they need to be communicated to all users and enforced through periodic audits. A number of professional societies have issued general guidance and guidelines to help and assist managers in this area. Organizations such as the Association of Information Technology Professionals (AITP), Society for Information Management (SIM), International Federation of Accountants (IFAC), and the Information Systems Audit and Control Association (ISACA) in their recent Control Objectives for Business Information Technology (COBIT) are examples of professional societies who recognize the need for general guidance. CONCLUSIONS User computing faces many of the same risks as traditional information systems, but without the benefit of management resources and controls that were developed over a long period of time for systems designed for mainframe computers. Additional risks, which should provide added reasons for managing and controlling user computing, are inherent in using PCs and distributed computing. User computing should be incorporated into the overall information systems strategy of the organization and be recognized as a resource that must be properly managed. From the IT management perspective, the manager should ensure that: • There are policy and procedures, and standards for user- developed applications. • User-developed systems are consistent with corporate IT architecture, short term and long term. • Risks from user-developed systems are minimized through planning, review, and monitoring to reduce organization-wide impact from incompatible systems, redundant systems, and other vulnerable areas cited. • An IT user committee be established to assist and support management of user-developed applications within the organization to facilitate communication at all levels. 779

FACILITATING KNOWLEDGE WORK The IT manager cannot ignore the increased number of users developing complex applications and the corresponding reliance by management to base decisions on the data produced by these applications. This makes careful evaluation of user computing groups a must for any manager. A failed application can do serious damage.


Chapter 62

Reviewing User-Developed Applications Steven M. Williford

In most organizations, sophisticated users are building their own applications. This trend began with computer-literate users developing simple applications to increase their personal productivity. User applications development has since evolved to include complex applications developed by groups of users and shared across departmental boundaries throughout the organization. Data from these applications is used by decision makers at all levels of the company. It is obvious to most user developers and IS managers that applications with such organizationwide implications deserve careful scrutiny. However, they are not always familiar with an effective mechanism for evaluating these applications. The method for reviewing user-developed applications discussed in this chapter can provide information not only for improving application quality but for determining the effectiveness of user computing (and end-user support) in general. A review of user-developed applications can indicate the need for changes in the end-user support department and its services as well as in its system of controls. In addition, such a review can provide direction for strategic planning within the organization and is a valuable and helpful step in measuring the effectiveness of support policies and procedures. Auditors might also initiate an audit of user applications as part of a continuous improvement or total quality management program being undertaken in their organization. User managers or IS managers should also consider performing a review of user applications as a proactive step toward being able to justify the existence of the end-user support department. For example, a review may reveal that some user applications contribute heavily to increased pro0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


FACILITATING KNOWLEDGE WORK ductivity in the workplace. In an era marked by budget cuts and downsizing, it is always wise to be able to point out such triumphs. DEFINITIONS AND CHARACTERISTICS Each organization may use a different set of definitions to describe various aspects of end-user support, and it is important to have a common understanding of the terms to be used. The following definitions are used in this chapter. • Application. An application is a set of computer programs, data, and procedures that is used to resolve a business-specific problem. For example, the accounting department may develop and implement an application that generates profit-and-loss statements. • Product. Products are the software used to develop or assist in the development of computer systems. Examples are spreadsheets, word processors, fourth-generation languages (4GLs), or graphics packages. Tools is another common term for product. • System. A system is a combination of computer applications, processes, and deliverables. During a review of user applications, it is important to determine the fit of the application within the system. An example is a budget system in which individual managers collect data from individuals using a manual process (e.g., paper forms), transfer the data to spreadsheets, and then electronically transmit the information to the accounting department, which consolidates the information and uses a budget forecasting application to create reports for senior management. • Workgroup. This is a group that performs a common business function, independent of organizational boundaries, and is tied together by a system or process. The managers who collect data for the budget system make up a workgroup; they may all report to different managers in different departments, and each person is probably in more than one workgroup. Other workgroups include project development and training. • Work unit. This term is used for such organizational units as departments, divisions, or sections (e.g., accounting, human resources, and engineering). An application typically resides in a work unit (i.e., is run by that unit) but affects other units throughout the organization. It is also important to review the unique characteristics of the user computing environment as they relate to user-developed applications: • Point of control. In a user computing environment, the person using the application has either developed the application or is typically closer to the developer organizationally. • The critical nature of applications. User applications tend to be valued less and are often not developed under the strict guidelines of tradition782










al IS applications. Because of this, the impact of the application being in error or not working at all is often not considered until it is too late. Range of measuring criticality and value. User applications may range from trivial to mission critical. Applications created by IS have a much narrower range but are concentrated toward the critical end of the scale. Development. In a user computing environment, the people who handle any one application may be scattered organizationally; the applications may also be scattered over time and across products. For example, an application may originally be developed on a word processing package. If the math requirements for the application become too complicated, the application would be transferred to a spreadsheet product. Finally, the application may be converted to a database product to handle complex reporting requirements. Quantity of applications. There are more applications developed by users than by the IS department, but they are usually smaller in scope and more tuned to individual productivity. Type of products. User development products usually provide a group of standard functions (e.g., Excel provides built-in functions). Creating a complex application using these products may require a high degree of knowledge about the development product or may necessitate using several development products to create a single application.

OVERCOMING MISCONCEPTIONS In some organizations, senior IS management initiates a review on behalf of user managers who may be concerned that their applications are getting away from them. In these cases, the end-user support managers may be asked to help sell the idea to corporate managers. In most organizations, gaining any management commitment to reviewing user applications requires overcoming several obstacles. The following sections discuss common management objections to reviewing user applications and ways to overcome this mind-set. User Applications Are Not Significant This is a typical misconception on the part of either corporate managers or senior IS managers. User applications may be perceived as transient, disposable, not production oriented — and therefore not significant. Traditional applications that are run by the IS department and cannot be tampered with in any way by anyone other than a technical expert are viewed as much more substantial, stable, and worthwhile. Senior management may be unwilling to approve an investment in reviewing what they perceive to be insignificant applications. To change this viewpoint and bring them up-todate, user managers should make the effort to point out particular userdeveloped applications that are currently providing critical data or con783

FACILITATING KNOWLEDGE WORK tributing more concretely to improved productivity and increased bottomline benefits. Ease of Use Results in Effective Applications This is another common misconception of senior management and IS management. They may believe that the ease of use of application development products would prevent users from creating anything but the most effective applications. Again, managers would be reluctant to spend resources on reviewing applications that they feel are typically well created. This misconception has been amplified by sales promotions that vigorously emphasize the ease of use of these products. IS managers should point out to senior managers that development products have limitations and that ease of use not only cannot guarantee that applications do what they were intended to do but can contribute to users creating unnecessary applications and duplicating effort. The User Developers Will Not Cooperate with the Review This is a common objection of user management and their employees. IS managers should promote the concept of an informal reviewing method (e.g., an inventory or statistics review, both of which are discussed in a later section) that would be less of an imposition on users and therefore less of a threat to those users who are very protective of their current work processes. In some organizations, users react to a review of their applications in much the same way they would react to an audit of their personal finances by the IRS — that is, they view it as a hassle and something they would like to avoid at all costs, regardless of whether or not they feel they have anything to hide (e.g., pirated software). If this is the case, the IS department might want to consider setting up self-audit guidelines with the cooperation of the users, or have them participate in the first central review. Review guidelines explain what the review team will be looking for. When users know what to expect and have a chance to evaluate their own applications using the same criteria the reviewers will be using, they are typically far more willing to cooperate with the actual review. In addition, involving them in the review can alleviate an us versus them attitude. PREPARING FOR THE REVIEW The review process follows a life cycle similar to that of any other project. The steps discussed in this chapter cover preparation for a review; they provide the background necessary to begin a review. These steps are designed as a general guideline. Not all companies may need all the steps, and early reviews (undertaken when user development is still relatively 784

new to the organization) will usually not follow all the steps. Preparing for a review requires:

• Defining the review objectives
• Defining the review method
• Defining the scope and content of the review

Each of these is discussed in the following sections and summarized as follows:

Define the Review Objectives. The audit may be designed to:

• Determine, identify, or resolve user applications problems.
• Evaluate end-user support services.
• Respond to financial issues.
• Collect specific information.
• Provide input to strategic or long-range planning.

Define the Review Method. Some of the most effective methods include:

• Formal audit
• Inventory
• Statistical review
• Best-guess review

Define the Scope and Content of the Review. Determining the scope and content helps:

• Define what the end-user support department will consider as user computing.
• Define which environments a particular review will evaluate.

Define the Review Objectives
Review objectives help determine the results and essentially guide the process by defining the intent of the review. IS and user managers should define and agree to the objectives before proceeding. In general, reviews are more successful if they focus on a particular objective. For example, once it has been established that a review of user applications would be helpful or even necessary in a particular organization, careful preparation for conducting the review should begin. Although some more informal reviews may not require all the steps discussed in this chapter, for the most part, each step is an important and necessary component of a successful review (i.e., one that provides useful and valuable information). The following is a checklist of these steps:

FACILITATING KNOWLEDGE WORK • Determine, identify, or resolve user application problems. This common objective ensures that the review will provide answers to such questions as: — Is there a problem with user applications (e.g., are particular applications proving to be error-prone, duplicating effort, or providing inaccurate data)? — What is the exact problem with a particular application (e.g., why is the application providing inaccurate data)? — How can this problem be solved? For example, what can be done to make this application more effective, or should a better set of checks and balances be implemented to validate user applications? A better set of checks and balances might involve comparing the results of a user application that reports sales volume by region to the results of a traditional IS application that tracks the same information. — What are the consequences of ignoring the flaws in this application? — Who should fix this application? — Is it worth the cost to fix the application, or should use of the application be discontinued? For example, users might create an application that automates the extraction and compilation of sales data from a larger system. The cost of maintaining or repairing such a system could be prohibitive if the data from the larger system could just as easily be compiled using a calculator. • Evaluate end-user support services. When there are complaints from the user areas (e.g., users may feel that they are not getting enough support to develop effective applications) or when the support department takes on new levels of support, it may consider a review of user applications to help them evaluate current services. For example, such a review can reveal a large number of error-prone or ineffective applications, which would indicate a need for more development support. The review might reveal that a number of users are duplicating applications development effort or are sharing inaccurate data from one application. Users may have developed applications that are inappropriate or inadequate for solving the problems they were designed to address. Any of these scenarios would indicate an increased need for support of user applications development. Typical questions to be answered with this objective include: — Can the services be improved? — Should new services be added? — Should services be moved to or from another group of users? 786

Reviewing User-Developed Applications — Are resources being allocated effectively (e.g., is the marketing department the only user group without any productivity-increasing applications)? • Respond to financial issues. This objective can provide pertinent information if budget cuts or competition within IS for resources threatens the user computing support department. A review of userdeveloped applications may lend credence to the need for user computing support of the development and implementation of valuable computer applications by pointing out an application that may be saving a great deal of time and money in a particular user area. A review with this objective provides information similar to the answers provided when evaluating services in the objective; however, the information is then used to answer such questions as: — Can the user computing support group be reduced or eliminated? — Can the services to user developers be reduced? — Can some budgetary efficiencies be gained in supporting user application development? • Collect specific information. Corporate or IS management may request information about user applications, especially if they receive data from them on a regular basis or if (as in applications run in the payroll department) many people would be affected by an inaccurate application. It is also not unlikely that user management would request an investigation of user applications in their area. Both of these cases are more common in companies that are committed to a continuous improvement or total quality program. • Provide input to strategic or long-range planning. A review with this objective would highlight much of the same information found in a review to evaluate services or respond to financial issues but would add a more strategic element to the process. For example, this objective would answer such questions as: — Do user applications contribute to accomplishing corporate goals? — Are there user applications that might create strategic opportunities if implemented on a broader scale? — Are resources adequate to initiate or foster development of user applications that might eventually contribute to achieving strategic goals? Define the Review Method The methods of collecting data should be determined by the political climate, the people who will act on the results of the audit, and the resources available to perform the work. The following sections discuss five of the most common and effective methods for reviewing user applications and examine the most appropriate instances for using each of them. 787

FACILITATING KNOWLEDGE WORK Formal Audit. This method for auditing user applications is usually selected if the audit is requested by corporate management. They may be concerned about applications that are built to provide financial information or about the possibility of misconduct associated with user applications. Because most organizations are audited in a financial sense, corporate and user management are familiar with the process and the results of a less formal method. However, a formal audit is more expensive and often more upsetting to the participants (i.e., the users). Inventory. Taking an inventory of user applications involves gathering information about the products and applications on each workstation. Although an inventory is a less formal variation of an audit and may be perceived by corporate and senior IS management as less significant than a formal audit, it provides much of the same information as a formal audit. An inventory can be useful when the information will be used for improving the user environment; preparing for later, more formal audits, or providing feedback to management. The support department may initiate this type of review for purely informational purposes to increase support staff awareness of user applications development (e.g., the objective may simply be to determine the number of user applications or to evaluate their sophistication). Inventories can be done in less time and are less expensive than formal audits. In addition, they can easily be done by the IS department without consulting a professional auditor. An inventory is more low-key than a formal audit, and taking an inventory of applications is far less threatening to users. Statistical Review. Statistical reviewing involves collecting raw data from the help desk, support logs, computer transactions, or similar sources. This method of auditing is useful only if the support department generates a statistically significant amount of readily available data. This implies a large number of applications, a large number of users, and centralized support or centrally controlled computing resources (e.g., mainframes and local area networks). A statistical review is most appropriate when minor tuning of user computing services is the objective. This is an extensive process that provides enough information to confirm or deny perceptions or indicate the need to change; it has a product focus which can tell how many people are using Lotus 1-2-3 or how many are using WordPerfect. These statistics often come from LANs as users go through the network to access the product. This product focus does not provide much useful information for deciding how to change. Best-Guess Review. This is the most informal type of review. When time is a critical element, a best-guess review can be performed on the basis of the existing knowledge of the end-user support staff. This can even be classified as a review of the IS department’s impression of user applica788

Reviewing User-Developed Applications tions. Corporate or senior IS management may request a report on user applications within the organization. Such a review can be useful if support people and users are centralized and the support people are familiar with the users and their applications. The IS staff can also use the results to make changes within their limits of authority. Although a best-guess review does not gather significant unbiased data, it can be surprisingly useful just to get user computing staff impressions down on paper. Define the Scope and Content of the Review The scope defines the extent of the review and should also state specific limits of the review — that is, what is and is not to be accomplished. In most organizations and with most types of review, it may be helpful to involve users from a broad range of areas to participate in defining the scope. Knowledge of the review and involvement in the definition of the review scope by the users can be valuable in promoting their buy-in to the results. The review may be limited to particular products, environments, a type of user, a work unit, or a specific application or system. Determining the scope and content focuses on the appropriate applications. As part of defining the scope and content of the application it is necessary to determine the types of user environments to be audited. This definition of environment is used to: • Define what the IS department considers a user-developed application, that is, to determine whether or not a particular application will actually be considered a user-developed application. For example, in some organizations, a programmer’s use of a user product to create an application would be considered user computing and the application would be included in a review of user applications. In most companies, however, applications that should be included in a review of user applications come from the point-of-origin, shared work unit, and workgroup environments. Applications in a turnover environment can also be included because, although development may be done by another group, users work with the application on a daily basis. Each of these environment classifications is discussed at the end of this section. • Define which environments a particular review will evaluate. For example, the application developed by the programmer using a user development tool would fit in the distributed environment, which is also discussed at the end of this section. However, the review might be designed to investigate only applications developed in a point-of-origin environment. In each organization, user computing may consist of several environments. Each environment is defined by products, support, and resources. 789

FACILITATING KNOWLEDGE WORK Although there may be a few exceptions or hybrids, a user computing environment can usually fit into one of the general categories discussed in the following sections. Point-of-Origin Environment. In this environment, all functions are performed by the person who needs the application. This is how user computing began and is often the image management still has of it. These applications are generally developed to improve personal productivity. They are typically considered to be disposable — that is, instead of performing any significant maintenance, the applications are simply redeveloped. Redevelopment makes sense because new techniques or products can often make the applications more useful. Shared Environment. In a shared environment, original development of the application is performed by a person who needs the application. However, the application is then shared with other people within a work unit or workgroup. If any maintenance is done to the application, the new version is also distributed to the other users in the unit. Work Unit Environment. In this environment, applications development and maintenance are performed by people within an organizational unit to meet a need of or increase the productivity of the work unit as a whole (unlike point-of-origin and shared applications, which are developed for the individual). The applications are usually more sophisticated and designed to be more easily maintained. They may also be developed by someone whose job responsibilities include applications development. Workgroup Environment. In a workgroup environment, applications development and maintenance are performed by people within a workgroup for use by others in the workgroup. The developer is someone who has the time and ability to create an application that fulfills an informally identified need of the workgroup. In most cases, the application solves problems of duration (e.g., expediting the process), not effort (i.e., productivity). Turnover Environment. In this type of environment, applications are developed by one group and turned over to another group for maintenance and maybe to a third group for actual use. There are many combinations, but some common examples include:

• The application is developed by the end-user support group and turned over to a work unit for maintenance and use. This combination is popular during user computing start-up phases and during the implementation of a new product or technology. • The application is developed and used by the user but turned over to the end-user support department for maintenance. Distributed Environment. In a distributed environment, applications are developed and maintained by a work unit for use by others. The develop790

Reviewing User-Developed Applications ing work unit is responsible for development and maintenance only. They may report to a user group or indirectly to central IS. The development products may be traditional programming products or user computing products. Although this is not typically considered a user computing environment, in some organizations the work unit is the end-user support group. Centralized Development and Support Environment. In this environment, a programming group under the direct control of central information systems develops and maintains applications for use by users. Although centralized programming groups in some organizations may use user computing products for applications development, in general, applications developed in this environment are not reviewed with other user applications. Reseeded Environment. A common hybrid of environments occurs when a point-of-origin application becomes shared. In these instances, the people receiving the application typically fine-tune it for their particular jobs using their own product knowledge. This causes several versions of the original application to exist, tuned to each user’s expertise and needs. Maintenance of the original application is driven by having the expertise and time available to make alterations rather than by the need for such alterations. This reseeded application grows into another application that should be grouped and reviewed with point-of-origin applications. However, these applications should be reviewed to ensure that they are not duplicating effort. Fifteen to twenty applications may grow out of a single application. In many cases, one application customized for each user would suffice.

Determining the Application Environment During preparation for the review, the scope and content phase helps determine which user environments should be included. For example, the IS staff may decide that only point-of-origin applications will be included in a particular review. The first step in actually performing the review is to identify the environment to which particular applications belong. To do this, it is necessary to isolate who performed the functions associated with the life cycle of the individual application. These functions are: • Needs identification. Who decided something needed to be done, and what were the basic objectives of the application created to do that something? • Design. Who designed the processes, procedures, and appearance of the application? • Creation. Who created the technical parts of the application (e.g., spreadsheets, macros, programs, or data)? 791

• Implementation. Who decided when and how the implementations would proceed?
• Use. Who actually uses the application?
• Training. Who developed and implemented the training and education for the application? Typically, this is an informal and undocumented process — tutoring is the most common training method.
• Maintenance. Who maintains the application? Who handles problem resolution, tunes the application, makes improvements to the application, connects the application to other applications, rewrites the application using different products, or clones the application into new applications?
• Ongoing decision making. Who makes decisions about enhancements or replacements?

The matrix in Exhibit 1 provides answers to these questions for each of the different user application environments.

Evaluating Applications Development Controls
This step in performing the review provides information about the controls in effect concerning user applications. It should address the following questions:

• Who controls the development of the application?
• Are there controls in place to decide what types of applications users can develop?
• Are the controls enforced? Can they be enforced?

Determining Application Criticality
This checklist helps determine the critical level of specific applications:

• Does the application create reports for anyone at or above the vice-presidential level?
• Does the application handle money? Issue an invoice? Issue refunds? Collect or record payments? Transfer bank funds?
• Does the application make financial decisions about stock investments or the timing of deposits or withdrawals?
• Does the application participate in a production process; that is, does it:
  — Issue a policy, loan, or prescription?
  — Update inventory information?
  — Control distribution channels?
• What is the size of the application? The larger the application (or group of applications that form a system), the more difficult it is to manage.
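A checklist such as this can be turned into a repeatable screening step. The sketch below is illustrative only; the question keys, weights, and thresholds are hypothetical and would have to be agreed with the review team. It shows how yes/no answers can be combined into a rough criticality rating that determines how much review an application receives.

# Illustrative only: scoring application criticality from checklist answers.
# The question keys, weights, and thresholds below are hypothetical.

CRITICALITY_QUESTIONS = {
    "reports_to_senior_management": 3,   # reports used at or above the VP level
    "handles_money": 5,                  # invoices, refunds, payments, fund transfers
    "makes_financial_decisions": 5,      # investments, timing of deposits/withdrawals
    "part_of_production_process": 4,     # issues policies, updates inventory, etc.
    "large_or_multi_application": 2,     # size/complexity of the application or system
}

def criticality_score(answers: dict) -> int:
    """Sum the weights of every checklist question answered 'yes'."""
    return sum(weight for key, weight in CRITICALITY_QUESTIONS.items() if answers.get(key))

def criticality_level(score: int) -> str:
    """Map a numeric score to a review category (thresholds are arbitrary)."""
    if score >= 8:
        return "mission critical - full review and documentation required"
    if score >= 4:
        return "significant - targeted review recommended"
    return "low - self-assessment sufficient"

if __name__ == "__main__":
    budget_app = {
        "reports_to_senior_management": True,
        "handles_money": False,
        "makes_financial_decisions": False,
        "part_of_production_process": True,
        "large_or_multi_application": True,
    }
    score = criticality_score(budget_app)
    print(score, criticality_level(score))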


Exhibit 1. Environment-Function Matrix for User Applications

Determining the Level of Security
This set of questions can help determine not only the level of security that already exists concerning user applications but also the level of security that is most appropriate to the particular applications being reviewed. The following questions pertain to physical security:

• Are devices, work areas, and data media locked?
• Is there public access to these areas during the day?
• Is the room locked at night?
• Is access to the area monitored?
• Is there a policy or some way to determine the level of security necessary?

The following questions relate to the security of data, programs, and input/output, and to general security:

• How is data secured? By user? By work unit? By work area? By device?
• Is the data secured within the application?
• Who has access to the data?
• Is use of the programs controlled?
• Are data entry forms, reports, or graphs controlled, filed, or shredded?
• Is there some way to identify sensitive items?
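Some of these questions, particularly who can read or modify shared data, lend themselves to simple automated checks. The sketch below is illustrative only and assumes a Unix-style shared volume; the directory path is a hypothetical example. It flags files in a shared data directory that any user of the system could modify.

# Illustrative only: flag overly permissive files on a shared data directory.
# Assumes a Unix-style file system; the directory path is hypothetical.
import stat
from pathlib import Path

SHARED_DATA = Path("/shared/departmental_data")   # hypothetical shared drive mount

def world_writable_files(root: Path):
    """Yield files that any user on the system can modify."""
    for path in root.rglob("*"):
        if path.is_file():
            mode = path.stat().st_mode
            if mode & stat.S_IWOTH:          # "other" users have write permission
                yield path

if __name__ == "__main__":
    for path in world_writable_files(SHARED_DATA):
        print(f"World-writable: {path}")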

Reviewing the Use and Availability of the Product Creating complex applications using user development products often requires more product knowledge than creating them using a comparable programming language. The user developer may go to great lengths to get around user product limitations when the application could probably be created more easily using traditional programming or a different tool. The questions in the following checklist help evaluate the appropriateness of the development products in use to create specific applications. • Are products being used appropriately? To answer this question, it is necessary to match user application needs to the tool used to create the application. This can help indicate the inappropriate use of development products (e.g., use of a spreadsheet as a word processor or a word processor as a database). • Is the user applying the product functions appropriately for the applications being developed or used? For example, a row and column function would not be the most effective function for an application designed to generate 10 or 15 reports using the same data but different layouts. As part of this step, the availability of user application development products should be assessed. The following questions address this issue. • Which products are available to this user? 794

• Which of the available products are employed by this user?
• Are these products targeted to this user?

Reviewing User Capabilities and Development Product Knowledge
This step in conducting a review of user applications focuses on the user’s ability to develop applications using a particular product and to select an appropriate development product for the application being created. The questions to answer include:

• Is the user adequately trained in the use of the development product he or she is currently creating applications with? Is additional training necessary or available?
• Does the user understand the development aspects of the product?
• Is the user familiar with the process for developing applications? With development methodologies? With applications testing and maintenance guidelines?
• Has the user determined and initiated or requested an appropriate level of support and backup for this application?
• Is the user aware of the potential impact of failure of the application?
• Are the development products being used by this user appropriate for the applications being developed?
• If the user is maintaining the application, does that user possess sufficient knowledge of the product to perform maintenance?

Reviewing User Management of Data
Because the data collected using user applications is increasingly used to make high-level decisions within the organization, careful scrutiny of user management of that data is essential. The following questions address this important issue:

• Is redundant data controlled?
• Is data sharing possible with this application?
• Who creates or alters the data from this application?
• Is data from traditional IS or mainframe systems — often called production data — updated by user applications or processes? If so, is the data controlled or verified?
• If data is transformed from product to product (e.g., from spreadsheet to database) or from paper to electronic media, is it verified by a balancing procedure?
• Are data dictionaries, common field names, data lengths, field descriptions, and definitions used?
• Are numeric fields of different lengths passed from one application to another?
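Balancing procedures of the kind asked about above can often be scripted. The sketch below is illustrative only; the file name, column label, and control total are hypothetical. It compares the total of a downloaded extract with the control total reported by the source system and flags any difference beyond a small rounding tolerance.

# Illustrative only: a simple balancing check between a source-system control
# total and a user-maintained extract. File name and column are hypothetical.
import csv

SOURCE_CONTROL_TOTAL = 1_254_300.25   # figure reported by the corporate system
EXTRACT_FILE = "dept_revenue_extract.csv"
AMOUNT_COLUMN = "amount"
TOLERANCE = 0.01                      # allow for rounding differences only

def extract_total(path: str, column: str) -> float:
    """Sum the amount column of the downloaded extract."""
    total = 0.0
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            total += float(row[column])
    return total

if __name__ == "__main__":
    total = extract_total(EXTRACT_FILE, AMOUNT_COLUMN)
    difference = total - SOURCE_CONTROL_TOTAL
    if abs(difference) > TOLERANCE:
        print(f"OUT OF BALANCE by {difference:,.2f}: investigate before reporting.")
    else:
        print("Extract balances to the source-system control total.")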

FACILITATING KNOWLEDGE WORK Reviewing the Applications This is obviously an important step in a review of user applications. The following questions focus on an evaluation of the applications themselves and assess problem resolution, backup, documentation, links, and audit trails associated with these applications. • Problem resolution: — Is there a mechanism in place to recognize whether or not an application has a problem? — Is there an established procedure for reporting application problems? — Is there a formal process in place to determine what that problem may be or to resolve or correct the problem? — Are these procedures being followed? • Backup: — Is the application backed up? — Is the data backed up? — Are the reports backed up? — Is there a backup person capable of performing the activities on the application? — Are backup procedures in effect for support, development, and maintenance of the application? • Documentation: —What documentation is required for the application? Is the application critical enough to require extensive documentation? Is the application somewhat critical and therefore deserving of at least some documentation? Is the application a personal productivity enhancer for a small task and therefore deserving of only informal or no documentation? — If documentation guidelines are in place, are they being followed? — How is the documentation maintained, stored, and updated? • Links: — How are the data, programs, processes, input, output, and people associated with this application connected? — What is received by the application? — Where does the application send data, information, knowledge, and decisions? — Are these links documented? • Audit trail: — Are the results of this application verified or cross-checked with other results? — Who is notified if the results of the application cannot be verified by other results? 796

Reviewing User-Developed Applications GUIDELINES FOR IMPROVING USER APPLICATION DEVELOPMENT Reviewing user applications requires that some resources (i.e., time and money) be spent. In most companies, these resources are scarce; what resources are available are often sought after by more than one group. A review of user applications is often low on senior management’s priority list. Reducing the time it takes to collect information can greatly improve the IS department’s chances of gaining approval for the review. However, reducing the need to collect information can decrease the need to conduct a review at all. This can be done by setting up and enforcing adherence to user application development guidelines. It is a cost-effective way to improve the user application development environment and help conserve limited resources. To begin, general guidelines should be created and distributed before a planned review is started. The effectiveness of the guidelines can then be evaluated. The following checklist outlines some areas in which guidelines established by the IS department can improve user application development: • Use of user development products. Users should be provided with a set of hypothetical examples of appropriate and inappropriate uses of development products (i.e., which products should be used to develop which types of applications). • Documentation. A checklist or matrix of situations and the appropriate documentation for each should be developed. This could also include who will review an application and whether review of the application is required or optional. • Support for design and development. A quick-reference document of functions or types of problems supported by various groups can be distributed to user developers. • Responsibility and authority. A list of responsibilities should be distributed that clearly states who owns the application, who owns the data, and who owns problem resolution. • Corporate computing policy. Corporate policies regarding illegal software, freeware or shareware, and security issues should be made available to user developers. One tactic to improve the quality of user application development while avoiding some of the costs in time and money of a full-fledged review is to set up workgroup auditors. These people may report to a corporate auditing group or to the IS department on a regular basis concerning user application development. This is particularly effective with remote users. CONCLUSION The increase in the number of users developing complex applications and the corresponding reliance of decision makers at all levels of the organiza797

The increase in the number of users developing complex applications and the corresponding reliance of decision makers at all levels of the organization on the data produced by these applications make a careful evaluation of the applications a necessary endeavor. In the current user environment, a failed application can seriously damage the business of the organization. To ensure that a review meets the objectives set out for it, IS managers must carefully plan the details of each aspect of the review. This chapter outlines the steps that should be taken before and during an actual review of user applications.


Chapter 63

Security Actions during Reduction in Workforce Efforts: What To Do When Downsizing Thomas J. Bray

Today, companies of every size are relying on Internet and other network connections to support their business. For each of those businesses, information and network security have become increasingly important. Yet, achieving a security level that will adequately protect a business is a difficult task because information security is a multifaceted undertaking. A successful information security program is a continuous improvement project involving people, processes, and technology, all working in unison. Companies are especially vulnerable to security breaches when significant changes occur, such as a reduction in workforce. Mischievous individuals and thieves thrive on chaos. Companies need even more diligence in their security effort when executing a reduction in workforce initiative. Security is an essential element of the downsizing effort. EVEN IN GOOD TIMES In good times, organizations quickly and easily supply new employees with access to the computer and network systems they need to perform their jobs. A new employee is a valuable asset that must be made productive as soon as possible. Computer and network administrators are under pres-


FACILITATING KNOWLEDGE WORK sure to create accounts quickly for the new hires. In many instances, employees may have more access than they truly need. The justification for this, however misguided, is that “it speeds up the process.” When an employee leaves the company, especially when the departure occurs on good terms, server and network administrators tend to proceed more slowly. Unfortunately, the same lack of urgency exists when an employee departure is not on good terms or a reduction in the workforce occurs. DISGRUNTLED EMPLOYEES Preparing for the backlash of a disgruntled employee is vital during an employee layoff. Horror stories already exist, including one about an exemployee who triggered computer viruses that resulted in the deletion of sales commission records. In another company, an ex-employee used his dial-up access to the company network to copy a propriety software program worth millions of dollars. A June 26, 2001, article in Business Week sounded an alarm of concern.1 The biggest threat to a company’s information assets can be the trusted insiders. This is one of the first concepts learned by information security professionals, a concept substantiated on several occasions by surveys conducted by the Computer Security Institute (CSI) and the Federal Bureau of Investigation (FBI). The market research firm Digital Research recently conducted a survey for security software developer Camelot and eWeek magazine. They found that, “Insiders pose the greatest computer security threat. Disgruntled insiders and accounts held by former employees are a greater computer security threat to U.S. companies than outside hackers.” Out of 548 survey respondents, 43 percent indicated that security breaches were caused by user accounts being left open after employees had left the company. 2 YEAH, RIGHT. WHAT ARE THE CASES? In many cases of ex-employees doing harm to their former employers, the extent of the problem is difficult to quantify. Some companies do not initially detect many of the incidents, and others prefer to handle the incidents outside the legal system. A small percentage of incidents have gone through the legal system and, in some cases, the laws were upheld. Each time this occurs, it strengthens support for the implementation of information security best practices. Although many states have computer crime laws, there is still only a small percentage of case law. 800

Security Actions during Reduction in Workforce Efforts Example Incident: The Boston Globe, by Stephanie Stoughton, June 19, 20013 Ex-tech worker gets jail term in hacking. A New Hampshire man who broke into his former employer’s computer network, deleted hundreds of files, and shipped fake e-mails to clients was sentenced yesterday to six months in federal prison. U.S. District Judge Joseph DiClerico also ordered Patrick McKenna, 28, to pay $13,614.11 in restitution to Bricsnet’s offices in Portsmouth, N.H. Following McKenna’s release from prison, he will be under supervision for two years.
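A recurring theme in such incidents is access that outlives the employment relationship. The sketch below is a minimal, purely illustrative way to reconcile a downsizing list against account exports from access control, remote access, and e-mail systems (the scans called for in Exhibit 1 later in this chapter). The CSV file names and column layout are assumptions, not features of any particular product.

```python
# Illustrative sketch: list accounts that remain active for downsized employees.
# Assumes HR and each system owner can export simple CSV files (hypothetical names):
#   downsized.csv          one employee ID per row
#   <system>_accounts.csv  rows of employee_id,account_id,status
import csv


def load_downsized(path):
    """Read the set of downsized employee IDs from a one-column CSV export."""
    with open(path, newline="") as f:
        return {row[0].strip() for row in csv.reader(f) if row and row[0].strip()}


def stale_accounts(accounts_path, downsized):
    """Return (employee_id, account_id) pairs still marked active in this system."""
    stale = []
    with open(accounts_path, newline="") as f:
        for row in csv.reader(f):
            if len(row) < 3:
                continue  # skip blank or malformed rows
            employee_id, account_id, status = (col.strip() for col in row[:3])
            if employee_id in downsized and status.lower() == "active":
                stale.append((employee_id, account_id))
    return stale


if __name__ == "__main__":
    downsized = load_downsized("downsized.csv")
    for export in ("access_control_accounts.csv",
                   "remote_access_accounts.csv",
                   "email_accounts.csv"):
        for emp, acct in stale_accounts(export, downsized):
            print(f"{export}: account {acct} for downsized employee {emp} is still active")
```

In practice, a report like this could be rerun daily during the notice period and routed to the information security team and the managers of the exiting employees.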

HIGH-TECH MEASURES E-Mail E-mail is one of the most powerful business tools in use today. It can also be a source of communications abuse and information leakage during a downsizing effort. The retention or destruction of stored e-mail messages of ex-employees must also be considered. Abuse Do not allow former employees to keep e-mail or remote access privileges in an attempt to ease the pain of losing their jobs or help in their job searches. The exposure here is the possibility of misrepresentation and inappropriate or damaging messages being received by employees, clients, or business partners. If the company wants to provide e-mail as a courtesy service to exiting employees, the company should use a third party to provide these services. Using a third party will prevent employees from using existing group lists and addresses from their address books, thus limiting the number of recipients of their messages. Employees who know they are to be terminated typically use e-mail to move documents outside the organization. The company’s termination strategy should include a method for minimizing the impact of confidential information escaping via the e-mail system. E-mail content filters and filesize limitations can help mitigate the volume of knowledge and intellectual capital that leaves the organization via e-mail. Leakage E-mail groups are very effective when periodic communication to a specific team is needed. The management of the e-mail group lists is a job that requires diligence. If ex-employees remain on e-mail group lists, they will continue to receive company insider information. This is another reason the company should not let former employees keep company e-mail accounts active as a courtesy service. 801
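The content filters and file-size limits mentioned above can be approximated with a simple outbound check during a termination window. The sketch below is illustrative only; the size threshold, the watch list of accounts, and the message fields are assumptions rather than the API of any real mail gateway.

```python
# Illustrative sketch: flag outbound messages that may carry bulk intellectual
# capital during a reduction-in-force window. Thresholds and fields are assumed.

from dataclasses import dataclass, field
from typing import List

MAX_ATTACHMENT_BYTES = 10 * 1024 * 1024   # assumed 10 MB policy limit
WATCH_LIST = {"jdoe", "asmith"}           # hypothetical accounts in a notice period


@dataclass
class OutboundMessage:
    sender: str                 # internal account name
    recipients: List[str]
    attachment_bytes: int = 0
    external: bool = False      # True if any recipient is outside the company
    flags: List[str] = field(default_factory=list)


def review(msg: OutboundMessage) -> OutboundMessage:
    """Attach review flags; a real gateway would quarantine or alert instead."""
    if msg.attachment_bytes > MAX_ATTACHMENT_BYTES:
        msg.flags.append("attachment size exceeds policy limit")
    if msg.sender in WATCH_LIST and msg.external:
        msg.flags.append("external mail from account in termination window")
    return msg


if __name__ == "__main__":
    m = review(OutboundMessage("jdoe", ["buyer@example.com"],
                               attachment_bytes=25 * 1024 * 1024, external=True))
    print(m.flags)
```

Checks of this kind supplement, rather than replace, the policy decision not to leave ex-employee accounts active in the first place.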

FACILITATING KNOWLEDGE WORK Storage E-mail messages of ex-employees are stored on the desktop system and the backup disk or tapes of the e-mail server. The disposal of these documents should follow the company’s procedure for e-mail document retention. In the absence of an e-mail document retention policy, the downsizing team should develop a process for determining which e-mail messages and attachments will be retained and which will be destroyed. LOW-TECH MEASURES The fact that information security is largely a people issue is demonstrated during a reduction in force initiative. It is the business people working hand in hand with the people staffing the technical and physical security controls who will ensure that the company is less vulnerable to security breaches during this very disruptive time in the company. Document Destruction As people exit the company during a downsizing effort, mounds of paper will be thrown in the trash or placed in the recycling bin. Ensuring that confidential paper documents are properly disposed of is important in reducing information leaks to unwanted sources. After one company’s downsizing effort, I combed through their trash and recycling bins. During this exercise, I found in the trash several copies of the internal company memo from the CEO that explained the downsizing plan. The document was labeled “Company Confidential — Not for distribution outside of the company.” This document would have been valuable to the news media or a competitor. All companies have documents that are confidential to the business; however, most companies do not have a document classification policy. Such a policy would define the classification designations, such as: • • • •

• Internal Use Only
• Confidential
• Customer Confidential
• Highly Restricted

Each of these classifications has corresponding handling instructions defining the care to be taken when storing or routing the documents. Such handling instructions would include destroying documents by shredding them when they are no longer needed. Many organizations have also been entrusted with confidential documents of business partners and suppliers. The company has a custodial responsibility for these third-party documents. Sorting through paper doc802

Security Actions during Reduction in Workforce Efforts uments that are confidential to the company or business partners and seeing that they are properly destroyed are essential to the information protection objective. SECURITY AWARENESS Security awareness is a training effort designed to raise the security consciousness of employees (see Exhibit 1). The employees who remain with the organization after the downsizing effort must be persuaded to rally around the company’s security goals and heightened security posture. Providing the remaining team of employees with the knowledge required to protect the company’s vital information assets is paramount. Employees should leave the security training with a mission to be security-aware as they perform their daily work. Some of the topics to be covered in the security awareness sessions include: • Recognizing social engineering scenarios • Speaking with the press • Keeping computer and network access credentials such as passwords, confidential • Changing keys and combinations • Encouraging system administrators and security administrators to be vigilant when reviewing system and security logs for suspicious activity • Combining heightened computer and network security alertness with heightened physical security alertness CONCLUSION Information security involves people, processes, and technical controls. Information security requires attention to detail and vigilance because it is a continuous improvement project. This becomes especially important when companies embark on a downsizing project. Companies should always be mindful that achieving 100 percent security is impossible. Mitigating risk to levels that are acceptable to the business is the most effective methodology for protecting the company’s information assets and the network systems. Businesses need to involve all employees in the security effort to have an effective security program. Security is most effective when it is integrated into the company culture. This is why security awareness training is so important. Technology plays a crucial role in security once the policies and processes have been defined to ensure that people properly manage the technological controls being deployed. A poorly configured firewall provides a 803

Exhibit 1. Checklist of Security Actions during Reduction in Workforce Effort

General
• Assemble a team to define the process for eliminating all computer and network access of downsized employees. The team should include representation from Human Resources, Legal, Audit, and Information Security.
• Ensure that the process requires managers to notify the employees responsible for information security and the Human Resources department at the same time.
• Educate remaining employees about information security company policy or best practices.
• Change passwords of all employees, especially employees with security administrative privileges.
• Check the computer and laptop inventory list and ensure that downsized employees return all computer equipment that was issued to them as employees.
• Be current with your software licenses — ex-employees have been known to report companies to the Software Piracy Association.

Senior Managers
• Explain the need for the downsizing.
• Persuade key personnel that they are vital to the business.
• Resist the temptation to allow downsized officers, senior managers, or any employees to keep e-mail and remote access privileges to ease the pain or help in their job search.
• If the company wants to provide courtesy services to exiting employees, the company should use a third party to provide these services, not the company's resources.

Server Administrators, Network Administrators, and Security Administrators
• Identify all instances of employee access:
  — Scan access control systems for IDs or accounts of downsized employees.
  — Scan remote access systems for IDs or accounts of downsized employees.
  — Call business partners and vendors for employee authorizations.
• Consult with departing employee management:
  — Determine who will take on the exiting employee's access.
  — Determine who will take control of the exiting employee's files.

E-Mail System Administrators
• Identify all instances of employee access:
  — Scan the e-mail systems for IDs or accounts of downsized employees.
• Forward inbound e-mail messages sent to an ex-employee's e-mail account to their manager.
• Create a professional process for responding to individuals who have sent e-mails to ex-employees, with special emphasis on mail messages from customers requiring special care.
• Remove ex-employees from e-mail group lists.

Managers of Exiting Employees
• Determine who will take on the access for the exiting employees.
• Determine who will take control of exiting employee computer files.
• Sort through exiting employee paper files for documents that are confidential or sensitive to the business.

Prepare for the Worst
• Develop a list of likely worst-case scenarios.
• Develop actions that will be taken when worst-case scenarios occur.


Security Actions during Reduction in Workforce Efforts false sense of security. This is why proper management of security technologies provides for a better information protection program. Notes 1. http://www.businessweek.com, June 26, 2001. 2. http://www.usatoday.com, June 20, 2001. http://www.cnn.com, June 20, 2001. 3. http://www.boston.com.


Chapter 64

Supporting Telework: Obstacles and Solutions Heikki Topi

Since the industrial revolution, most organizations have been built around the model of bringing employees to centralized locations to perform work with a group of peers under the immediate supervision and control of management. For almost three centuries, the general assumption among both managers and their employees has been that the employer assigns a place where the employee performs his or her work. Today, advances in telecommunications technology and transportation have freed many workers from the traditional model of a fixed place of work in two major ways. First, many Americans have become telecommuters — spending at least a part of their regular business hours either in home offices, satellite offices or neighborhood work centers close to their homes, at customer sites, or on the road. Second, it has become increasingly common for work to be performed by virtual teams — where the membership of the team is not limited by the physical location of an employee’s primary workplace or a team member’s functional unit within the organization. Research and practical experiences from a large number of organizations have shown that various types of teleworking arrangements are advantageous for both organizations and their employees.1,2 When implemented well, telework arrangements can yield significant cost savings and the flexibility to put together the best possible teams for various projects. Telework arrangements have also been associated with increased individual worker productivity and increased flexibility for employees to achieve their personal goals for quality of work life by removing some of the constraints of strictly defined place and time of work. Current estimates are that about 20 million Americans are telecommuting either full- or part-time, and the number is projected to grow to 40 mil-


FACILITATING KNOWLEDGE WORK lion by 2010.3 Nevertheless, to avoid the overly optimistic predictions of the recent past, a clear assessment of the potential obstacles to achieving the benefits of teleworking in a given organization setting is an important first step. This chapter begins with a discussion of common obstacles that organizations and employees face when implementing telework arrangements. Then, both technological and managerial solutions to alleviate these obstacles are described in some detail. OBSTACLES OF TELEWORK ARRANGEMENTS Although new personal computing devices and an increasingly technologysavvy workforce have made some of the traditional obstacles to telework obsolete, successful telework initiatives still require significant IT management planning and oversight. Because the extent to which obstacles exist in a given organization will depend on the organization itself and the way it competes in its industry, an important first step is to carefully assess the potential pitfalls. Task- and Resource-Related Obstacles On many occasions, work must be performed at a specific location because the physical objects being processed are there. Most manufacturing and construction jobs fall into this category; and with them, in most cases it is either practically impossible or economically unjustifiable to divide the work between a large number of sites. Thus, the location of work is determined by the need to bring a team of workers together to a place that is suitable for the manufacturing operation or happens to be the site under construction. The location of work is also often constrained by the need for specialized equipment or other resources. This is obvious in manufacturing; in most cases, the work is where the machinery is located. Also, in many industries, research and development work requires specialized laboratory equipment. For office workers, the location of work is often determined by the location of the documents that employees manipulate in their job; much of office work still involves processing data on paper forms. Many times, work takes place in a location dictated by access to physical archives of either organization-specific data or general knowledge. In many service jobs, the work must be performed where customers are if they value face-to-face contact. Not only customers, but also suppliers, financial institutions, government agencies, and other stakeholders assume that most organizations have a stable physical location where at least some of the employees can be found. However, in the past few years we have learned that the traditional model of attracting a customer to a specific physical location need not 808

Supporting Telework: Obstacles and Solutions always hold true. Although a physical location for services such as hotels and restaurants, as well as some types of retail activities, is still likely to be desired in the future, end consumers have turned to the Internet for 24/7 convenience and sometimes for lower costs for many other services. This includes retail sales, banking, and even repair services — as well as for a vast amount of information without direct people contact. That is, information technology can connect customers to virtual retail outlets or product experts and can make formerly location-specific resources (such as information sources and paper forms) available through networks, not a physical place. Management-Related Obstacles Many organizations require employees to perform their jobs in a specific location even if the employees are not manipulating tangible objects in their jobs and they need no tangible resources to get their jobs done. This section explores the reasons underlying these requirements. Perceived Performance Advantages. Often, management’s perception is that the aggregated performance of all the employees working together in a specific location is higher than the sum of individual performances would be if employees worked separately in different locations; that is, co-location per se will create clearly beneficial synergies. People are brought together to work in the same location because management assumes that the support that employees give to each other improves their joint performance. The assumption is that co-location enables unplanned informal meetings, quick answers to unanticipated questions, idea generation, and problem-solving sessions.

It is important, however, to note that with advantages come disadvantages. Disruptions and interruptions are all too typical in office environments, and can result in a lack of concentration; these include background noise, events unrelated to work being performed, and unnecessary and unproductive planned and unplanned meetings. Whether or not continuous co-location of a team is truly beneficial depends on several factors, such as the nature of the project and tasks, the stage of the project, the number of people involved, the cohesiveness of the project team, team members’ abilities to work together under different work environments, the nature of the facilities allocated to the team, and the alternative communication mechanisms available. No organization should automatically assume that one arrangement is always better than another. Flexibility and a willingness to consider the best possible alternative for a particular project and organization at a particular project stage is the key. Managers Want to Manage Face-to-Face. Many organizations and individual managers still feel they need their employees in a location where “man809

FACILITATING KNOWLEDGE WORK agement by walking around” or “management by example” philosophies can be literally applied, and where work behavior can be directly observed. Many managers are still uncomfortable with the idea of being responsible for a team if they are not able to be in face-to-face contact with team members on a regular basis. Unfortunately, the problem is often the lack of trust between team members. Many managers still have an expectation that they should be able to control employees’ contributions to the goals of the team and the organization by observing and directly guiding their behavior. This is especially true in situations when the evaluation of results is difficult or the risk of failure is high. Employees Want to be Managed Face-to-Face. On the other hand, employees often feel the need to be managed face-to-face. Some want to show with their behavior that they are loyal and useful contributors to the organization’s goals. Some believe that promotions go to those who are visible. Some believe that in a downsizing environment, it is easier to lay off an employee who is not physically present. Others want someone else to help organize their work for them or feel safer if they have the option of getting face-to-face advice when confronting a difficult decision. The needs to manage and be managed face-to-face are strong reasons underlying the tenacity of traditional work arrangements. Naturally, an important question is the validity of these needs in situations where it is possible to use modern communication technologies to implement mechanisms that would lead to the same results as traditional management techniques that require physical vicinity. Of course, this requires that employee performance be evaluated not on their visible behavior, but on their contributions to the organization’s goals — that is, their results. Social Contacts and Support. Organizations often bring their employees together into one or several locations because this is what their employees want. For many, the physical workplace is an important social environment; not all employees want to work outside the workplace, including at home. Fears of social isolation and a “life” without the variety of human contacts offered by traditional work arrangements are valid and significant factors that can affect the models of work arrangements that an organization is able to utilize. This is linked to the need to be accepted as a valuable member of an organization by managers and co-workers, as well as to the way social relations at the workplace often become a natural social network that supplements an extended family and circle of personal friends. For many, it is essential to be able to leave the home on a regular basis and enter a different social and physical sphere. Some employees also feel that the support given by co-workers in the immediate proximity is valuable and helps them perform their tasks better. A question or other request for help is perceived to be less intrusive and more effective if presented in person than if presented using a communication technology such as phone, 810

Supporting Telework: Obstacles and Solutions videoconferencing, or e-mail. Also, in an environment where employees work in close physical proximity, it is often easier to find support for complex or in other ways difficult decision making. REMOVING OBSTACLES WITH TECHNOLOGY This section discusses a variety of technological tools and arrangements that can be used to enable and support telework arrangements. Both the tools and supporting their usage are discussed because it is not sufficient to make technological resources available to employees; effective telework arrangements also require high-quality technology support. Note that in a subsequent section we discuss a variety of managerial actions to remove obstacles; even well-supported technology is not enough if other aspects of the work environment are not effectively implemented. Technology Resources Virtual work arrangements are made possible by communication and computing technology. A well-functioning, efficient telecommunications infrastructure for voice, video, and data is one of the first requirements for a technology environment to support telework in its various forms. In voice communication, the key characteristic of a support system for virtual workers is flexibility: the phone system of an enterprise should be able to connect a phone call to an employee with one number independent of his or her location, whether it is within company facilities, in the home office, or on-the-road with a mobile phone. Increasingly, a mobile phone with global reach is all that is needed for voice communication. In data communication, it is very important that virtual workers have access to all the same services that are available to employees with a permanent office, although not always at the same speeds. Virtual private networks (VPN) and other remote access technologies provide seamless, location-independent access to corporate data resources. The importance of sufficient bandwidth cannot be overemphasized. Any connection that is made regularly from the same location (as, for example, in telecommuting), broadband connections using xDSL, cable modems, or satellite connections are the norm, and 128-kbps ISDN should be the lowest acceptable capacity if broadband is not available. It does not make sense to lower the productivity of a highly paid professional with low bandwidth or an unreliable connection if a faster and more reliable option is available at a marginal monthly cost of only $50 to $100. For mobile workers, the increasing availability of WLAN connections at airports, hotels, and restaurants/coffee shops, and 2.5/3G connectivity in densely populated areas, provide the opportunity to access corporate data resources regardless of the location. Further, e-mail, instant messaging, and access to core corporate systems are increasingly available on mobile phones and personal digital assistants 811

FACILITATING KNOWLEDGE WORK (PDAs); in many cases, these capabilities are all a mobile worker needs in addition to voice communication. The promise of videoconferencing has not yet been widely fulfilled except in large organizations, mostly because of the lack of sufficient bandwidth. Fortunately, Internet-based videoconferencing solutions are becoming increasingly useful, and bandwidth costs for the still widely utilized ISDN-based systems continue to decrease. Even the high-end 384-kbps systems are currently affordable to use. Furthermore, continuously developing telecommunications technologies, such as increasingly fast fixed-line Internet connections and third-generation wireless devices, provide significantly higher data rates, better security, and improved availability compared to similar technologies that are currently in widespread use. An organization that wants to utilize the opportunities offered by telework arrangements today can build a state-ofthe-art telecommunications infrastructure to support work from home offices, satellite offices, customer sites, as well as on-the-road. Technology Support Providing the best of technology to employees who are doing their work outside the traditional work environment will not be sufficient if they are not trained and both able and willing to learn on their own to use the technology. Virtual workers are often far away from the traditional organizational support system and, therefore, need to have stronger technical problem-solving skills than peers who are closer to the reach of the organized support. These issues were often ignored in the early days of telecommuting when employees interested in working at home were, in many cases, technology experts themselves. Even in today’s environment, the need for technological survival skills is still high enough to warrant a special training course for employees who are starting to telecommute or become members of virtual teams. The content of training should not only include standard software, but also issues related to the telecommunications solutions being adopted. The more dependent a teleworker is on the resources available on a network, the more important it is that the training enables the employee to troubleshoot and independently solve at least the most typical simple network problems. However good the training, the importance of an excellent support structure cannot be overemphasized. Both telecommuters and virtual team members have their own special computing and telecommunications needs, and sufficient support should be available to address the relevant issues. For members of virtual teams, the most essential issue is the creation and maintenance of a proper environment for sharing information, an environment that allows efficient and effective file sharing and electronic 812

Supporting Telework: Obstacles and Solutions conferencing among team members, wherever they are located. For telecommuters, it is vitally important that support personnel are easily accessible by redundant communication channels and that they are able to provide help both with traditional software problems and telecommunications issues specific to telecommuters. The support organization should have the necessary expertise to help all teleworkers and teams choose the best telecommunications solution from the set of available solutions. REMOVING OBSTACLES WITH MANAGERIAL ACTIONS The technology-based solutions reviewed above are not, however, all that is needed. Especially for managers and knowledge workers focusing on technology-related projects, it is easy to understand the technical solutions and to attempt to apply them to all problems. Technical solutions alone will, however, fail to bring the results that can be achieved by choosing a balanced approach that integrates appropriate technologies with carefully selected managerial tools. Task Support One of the problems with telework is the real or perceived lack of support for a variety of work-related tasks. Many employees find it problematic if they are not able to turn to their immediate co-workers or supervisors to ask help with difficult decisions or problems requiring specialized knowledge that only few in the organization have. It appears that the problem is that of gaining somebody’s immediate attention. Any teleworker has a variety of telecommunications media (e.g., phone, videoconferencing, electronic conferencing, e-mail, instant messaging) for contacting the same coworkers they would be approaching to ask for help in a face-to-face setting. However, every one of these media is easier to ignore than a direct personto-person contact attempt in an office setting. To alleviate the concerns regarding the lack of task support in a virtual work environment, clear organizational task support mechanisms should be available. First, either messaging and groupware tools such as Lotus Notes or other intranet technologies should be used appropriately to maintain organizational memory in the form of questions and answers or problems and their solutions. Second, teams and departments where at least some of the employees are working virtually should explicitly acknowledge the task support needs of the employees outside the permanent location and give them a priority when appropriate; this requires conscious effort, especially from those working on the company premises. Third, whenever possible, regular face-to-face meetings with telecommuting employees also present should be a part of the workplan for all project teams and workgroups. Among other benefits, these meetings offer excellent opportunities for developing interpersonal relationships. Fourth, in many situations it is 813

FACILITATING KNOWLEDGE WORK good if the entire group or department learns to use asynchronous media (such as e-mail or electronic conferencing) for questions and answers that do not require an immediate response. An additional benefit is that these media provide an opportunity to store the questions and answers as part of the organizational memory. Whatever the technical implementation chosen, it is important that the task support needs of the employees working in the virtual environment are taken into account and acknowledged in a way that reduces perceptions of a lack of task support. Modified Reward Mechanisms Successful telework requires organizational reward mechanisms that are adapted to suit the new models. If work behavior cannot be directly observed, it should not be a basis for evaluation. This is, however, a tradition that is very difficult to change. Especially in cases when direct results of an employee’s work do not warrant a positive evaluation, managers often tend to rely on their impressions based on observations of work behavior. If an employee has successfully created the impression of hard work and strong dedication, it is much easier for a manager to attribute the unsatisfactory results to external causes that were not under control of the employee. If an employee is not co-located with the manager who performs the evaluation, impressions of dedication and diligence in virtual environments can be created, but it is much more difficult. To a certain extent, visibility in the communication channels of the virtual world is also used as a behavioral criterion. For example, work-related e-mail or electronic conferencing postings in the middle of the night could create a positive image of hard work in some cases. Organizations should, however, find ways to move toward an evaluation model that, in a fair and equitable way, values contributions to the organization’s goals, not the number of hours spent or some other similar activity-based measures. In many organizations, this is a clear cultural change, which is never an easy process, and it requires conscious effort by the management. Extra effort is needed to ensure that promotion decisions are fair and perceived to be fair. Virtual workers can aid this process by ensuring that the results of their work are not hidden and that their supervisors understand what they are achieving. In addition, the external signs of rewards should be available and visible in the virtual world if there are corresponding signs in the real world. For example, “Employee of the Month”-type recognitions should be shown not only on the wall in company headquarters but also be clearly visible on the corporate intranet. If privileges that are only useful at the company pre814

Supporting Telework: Obstacles and Solutions mises (such as special parking or access to special facilities) are used as reward mechanisms, some corresponding rewards should be developed for those operating in the virtual world. If the top management of an organization wants to support and encourage virtual work models, it should express and communicate this explicitly to all employees using channels that are available also to those who are not physically present at the company locations. The best support is supportby-example, which requires that top management is also able and willing to use the communication channels typically utilized by virtual workers. Organizations with top managers who choose to telework in the evenings or weekends may also have leaders who are more attuned to the social trade-offs associated with virtual work arrangements. Top management also needs to express the organization’s commitment to create and maintain a reward system that is fair to everybody. This is because virtual work arrangements may create concerns regarding fairness among both those who participate in them and those who do not; achieving success with these arrangements requires that these concerns are visibly addressed. Maintaining and Enhancing Organizational Identity Employees want to identify with the organization for which they are working, and it is important that teleworkers have sufficient opportunity to strengthen their organizational identities. On the one hand, this requires that organizational networks (particularly intranets) include information and symbolism that aid all employees with identification, such as statements regarding corporate values, mission, basic objectives, history, future, consistent use of corporate colors and logos, or examples of achievements by the company and its employees. On the other hand, it is important that all employees without a strong permanent home within the organization are regularly brought together at social meetings in which they can learn to know their co-workers better, in a relaxed face-to-face setting, and to identify better with the entire organization. One of the best ways to create and maintain strong identification with an organization is to create an atmosphere of trust in which every employee can feel that he or she is trusted to fully contribute toward the organization’s goals without continuous observation and control by management. These feelings can be nurtured by explicit statements and by other visible signs of trust, but it is even more important that employees believe that management appreciates and accepts the telework arrangements the employees have chosen. Responding to Other Social Needs Organizations willing to utilize virtual work arrangements successfully should find mechanisms to respond to employees’ social needs. For many 815

FACILITATING KNOWLEDGE WORK employees, work is a justification to leave home regularly and meet other adults. Virtual teams, work at customer sites, and satellite offices fulfill this need; but for many teleworkers who work at home, the advantages of freedom and flexibility are significantly challenged by the long hours alone without the opportunity to stop by a colleague’s office or have a lunch break together. Also, the lack of face-to-face social contacts supports the feeling of being “out-of-the-loop” and not part of a core circle of employees. One solution to this problem is to make sure that all employees, however virtual their normal work arrangements might be, regularly attend face-to-face meetings with both their supervisors and their co-workers. Also, if possible and mutually agreeable, one option is to implement parttime telework arrangements so that even telecommuters spend one or two days per week at central or satellite office locations. Naturally, if this option is chosen, logistical arrangements are necessary to make sure that cost savings related to the reduction of office and parking space are not entirely lost. The developments in telecommunications technologies — especially relatively inexpensive, high bandwidth solutions with a flat-fee pricing structure such as xDSL and cable modems — make it possible to use a rich variety of communication options (including videoconferencing over IP networks) continuously also for non-business-related purposes. It is important to remember that active utilization of modern telecommunications technologies alleviates to a certain extent, but not fully, the feelings of being alone and far away from the center of action. CONCLUSION Many organizations have identified good reasons to make telework possible for their employees. The benefits include increased flexibility and cost reductions for both individuals and organizations. However, many organizations are struggling to make these virtual work arrangements successful and widely available. The technical opportunities for teleworking continue to evolve. Advances in wireless communication technologies, for example, are expected to have a strong impact. Even with future technologies, the benefits of telework will be elusive if due attention is not given to the careful design of work arrangements that fulfill the fundamental needs of both organizations and their employees. The two key lessons learned are: 1. Managers need to carefully evaluate the feasibility and potential obstacles of telework for a given organization and given type of work. 2. The success of virtual work arrangements cannot be guaranteed with technological solutions only. In most cases, strong and visible managerial interventions are also necessary. 816

Supporting Telework: Obstacles and Solutions Notes 1. Agpar, M., IV, “The Alternative Workplace: Changing Where and How People Work,” Harvard Business Review, 76, 3, May/June 1998. 2. Davenport, T.H., and Pearlson, K., “Two Cheers for the Virtual Office,” Sloan Management Review, 39, 4, Summer 1998. 3. Venkatesh, V. and Johnson, P., “Telecommuting Technology Implementations: A Withinand-Between-Subjects Longitudinal Field Study,” Personnel Psychology, 55, 3, Autumn 2002.

Additional References Anonymous, “Transcend the Top Ten Telecommuting Traps,” Byte, 23, 7, July 1998, pp.28–29. Bailey, D., “A Review of Telework Research: Findings, New Directions, and Lessons for the Study of Modern Work,” Journal of Organizational Behavior, 23, 4, June 2002. Berlanger, F., “Technology Requirements and Work Group Communications for Telecommuters,” Information Systems Research, 12, 2, June 2001. Berlanger, F. and Collins, R.W., “Distributed Work Arrangements: A Research Framework,” Information Society, 14, 2, April–June 1998. Bresnahan, J., “Why Telework?,” CIO, 11, 7, Jan. 15, 1998. Hill, E.J., Miller, B.C., Weiner, S.P., and Colihan, J., “Influences of the Virtual Office on Aspects of Work and Work/Life Balance,” Personnel Psychology, 51, 3, Autumn 1998. Jala International, “Telecommuting Forecast,” http://www.jala.com, accessed on November 7, 2002.


Chapter 65

Virtual Teams: The Cross-Cultural Dimension Anne P. Massey V. Ramesh

The trend toward greater distribution of an organization’s operations has been evident for decades. Today’s network-enabled organizations — LANs, WANs; the Internet, intranets, and extranets; and virtual private network connections — allow workers to overcome boundaries of space and time. In concert, laptops and new mobile devices are extending the workplace to anywhere a knowledge worker might be. The notion of teamwork by geographically dispersed or “virtual” teams has also become more common, especially in the face of tighter corporate travel budgets.1,2 Virtual teams, supported by collaboration technologies such as e-mail, synchronous messaging systems, groupware (e.g., Lotus Notes), and real-time conferencing (e.g., NetMeeting), thus hold the promise of flexibility and responsiveness, as well as lower costs and improved resource utilization that can impact the organization’s bottom line. The reality is, however, that technology to compensate for space and time differences does not completely capture the support needs of today’s virtual teams. Virtual teams are not only separated by space and time, but also often by culture. Cultural differences may occur between different functions and divisions within the same organization. For teams that include business partners or team members with different countries-of-origin, the cultural differences can be especially salient. This is because culture is a boundary condition for all interpersonal communications. It affects the way people communicate, because people generally behave according to culturally learned norms, rules, and values.3,4 A key question for managers, then, is how technology-enabled virtual teams that exchange knowledge across space, time, and culture can be effectively supported. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC


FACILITATING KNOWLEDGE WORK This chapter first introduces technology/team “fit” issues, and then focuses on how cultural tendencies, specifically country-of-origin differences, can influence task/technology fit in virtual teams. Two case examples demonstrate some of the challenges to managing virtual teamwork across national boundaries. The chapter ends with some guidelines for practice. VIRTUAL TEAM TASKS AND TECHNOLOGY Regardless of the business problem being addressed by the team — whether it is a new product development task or a technology transfer task — members of virtual teams must: • Convey data-information-knowledge • Converge to shared meanings • Make task-related decisions Virtual teams also commonly have available to them a variety of technologies to enable their teamwork: for example, teleconferencing, e-mail, groupware, and sometimes videoconferencing. The management issue then becomes: what type of technology environment best supports the various activities and kinds of data–information–knowledge exchange inherent in the work to be conducted by a virtual team? Collaboration technologies can be described in terms of three dimensions:5 • Richness: the ability to convey verbal and nonverbal cues, and facilitate shared meaning in a timely manner • Interactivity: the extent to which rapid feedback is allowed • Social presence: the degree to which virtual team members feel close to one another For example, e-mail would be characterized as being relatively low in richness, low in interactivity, and low in social presence. In contrast, real-time applications such as video- and teleconferencing provide more richness in all three dimensions. Furthermore, both practice and theory suggest that the proper “fit” between a technology and a virtual team task should enhance performance. Specifically, different technologies may be better suited for conveying data–information–knowledge, while others are better suited for convergence-related tasks such as making decisions. For example, e-mail facilitates well the fine-tuning and reexamination of messages, but richer synchronous technologies (such as videoconferencing) are needed to resolve differing viewpoints among team members and to develop a consensus for decision making. 820

Exhibit 1. Two Case Examples

Air Products is a combined gases and chemicals company with products and services competing in industrial, consumer, and agricultural markets. Corporate headquarters is in the United States, with other company personnel based in more than 30 countries.

TechCo provides software development, conversion, and maintenance services to Global 500 companies in 50 countries in North America and Europe. TechCo's Offshore Development Centers, all based in India, perform a range of IT-related activities for global clients.

Less well understood, however, is how the choice of and preferences for various technologies can differ across cultures. Specifically, for culturally diverse virtual teams, the issue raised is: do cultural differences influence the “fit” between a technology and a virtual team task? Below we report our comparative findings from virtual teams at two global organizations to better understand if, and how, cultural differences might matter. TWO CASE EXAMPLES The two organizations we studied are briefly introduced in Exhibit 1. Air Products, a U.S.-based Fortune 500 firm, deploys virtual teams to transfer technology expertise between the United States and various global sites. The technology transfer projects underway at the time of data collection were between the United States and sites in the United Kingdom, Italy, and Korea. Accomplishing the knowledge transfer task involved the exchange of (1) technical information (e.g., data, blueprints); (2) procedures (e.g., operating manuals); (3) best practices regarding operations; and (4) conceptual knowledge about a technology or potential applications. The second case example, TechCo, is an India-based, offshore service provider that uses culturally homogeneous virtual teams distributed across multiple geographically distributed development center sites to provide software development services to global clients. The software development practices employed by the teams within various phases of the systems development life cycle are well understood and have been assessed at Level 5 of the SEI Capability Maturity Model. While the tasks of these virtual teams differed (i.e., technology transfer at Air Products and software development at TechCo), the success of these virtual teams depended on the same two interrelated factors: (1) how well the process steps and tasks were defined, and (2) the effectiveness of communication between team members. The teams in both organizations had defined process steps and tasks to accomplish their teamwork, which required the exchange of (explicit) project documents as well as (implicit) human expertise and knowledge. Similarly, both team tasks required the conveyance of information and the convergence to shared meanings and decisions through communication. The virtual teams at both organizations also had similar 821

FACILITATING KNOWLEDGE WORK communication technology options available to them: that is, telephone, email, groupware or “repository” systems, and face-to-face visits. While the process steps and associated tasks at both organizations were well defined and the communication technologies were similar, the team outcomes differed: the virtual team efforts at Air Products were more difficult, time-consuming, and less successful overall than those at TechCo. Furthermore, members of the Air Products teams suggested that the extent of cultural differences between team members was a key factor contributing to these difficulties. For example, while training materials could be conveyed via e-mail and document repositories, these materials had been developed in the United States and they used terminology and phrases that would not necessarily be understood across cultures. This was true even for the U.S.-to-U.K. exchange. Moreover, social norms for interaction and communication varied between the U.S. and global sites in Europe and Asia. Effective technology transfer required not only completeness of document exchange, but also communications clarity for convergence to shared meanings, and the team members at Air Products believed that the only way to accomplish clarity was to use face-to-face meetings, which required international travel between sites. In contrast, although the actual team members involved in a TechCo project at any given time varied from phase to phase, the team members reported no communication difficulties among members that could be attributed to culture. A conspicuous difference between Air Products and TechCo was the degree of diversity in the cultural make-up of the respective virtual teams: the composition of the core development teams at TechCo was extremely homogeneous, with team members exclusively of Indian origin. These case studies provided initial evidence that cultural differences can matter in virtual teamwork. Specifically, cultural variability has a major effect on the difficulty (or ease) that virtual team members have in communicating with others to accomplish tasks. One potential explanation is that the choice of a technology for a given task may not be perceived the same across cultures. This is because team members from different cultures need or expect different things from an enabling communication technology. Below we take a closer look at what cultural dimensions could cause this. THE CULTURAL DIMENSION People learn patterns of thinking and acting from living within a defined social environment, often typified by national culture. As such, culture impacts a person’s communication preferences and behaviors. Researchers have found that negative and positive reactions to interpersonal communications may therefore be more understandable and predictable when 822

Virtual Teams: The Cross-Cultural Dimension an individual’s cultural context is taken into account.3 If we extend this notion to virtual teams, we can expect that people with different cultural backgrounds will have different perceptions of the “fit” between various communication technologies and tasks. Thus, differences in national culture could have important implications for organizations attempting to manage and support virtual teams. Three cultural dimensions may be particularly relevant in explaining cross-cultural communication differences: (1) individualism–collectivism, (2) communication contextuality, and (3) uncertainty avoidance.3,4,6 Individualism–Collectivism Individualism–collectivism is the preference to act as individuals rather than as members of a group. Team members from collectivist cultures (e.g., Korea and other Asian nations) may prefer that members complete tasks together. Conversely, individualist cultures (e.g., U.S., U.K.) may be more comfortable with loose ties among team members and the division of tasks. In a virtual team setting, this implies that: Team members from collectivist cultures prefer richer technologies to allow for more interaction — such as videoconferencing — to accomplish team tasks.

Communication Contextuality Communication contextuality refers to the amount of extra information needed to make decisions versus “just the facts.” In general, in low-context cultures, individuals do not give more or less information than necessary; rather, they state directly only that which they believe to be true with sufficient evidence. Individualistic cultures (e.g., the United States and the United Kingdom) tend toward low-context communication in the vast majority of their social interactions, whereas people in collectivistic cultures (e.g., Asian) tend toward high-context communication. In a virtual team setting, this implies: Team members from high context cultures prefer richer technologies to allow for the feeling of social presence, particularly when communicating with individuals they have never met.

Uncertainty Avoidance

Members of cultures that are high in uncertainty avoidance (e.g., Japan) seek details about plans and have a lower tolerance for uncertainty and ambiguity. Conversely, cultures that are low in uncertainty avoidance (e.g., the United States and the United Kingdom) need fewer rules and are more comfortable with ambiguity. Furthermore, cultures with high uncertainty avoidance tendencies prefer to avoid conflict and have a strong desire for consensus. In a virtual team setting, this implies:

Team members from high uncertainty avoidance cultures prefer technologies (such as groupware) that create records of discussions and decisions.

Exhibit 2. Culture Moderates Perceptions of Technology–Task "Fit" (diagram: Technology and Task jointly shape Perceptions of Fit, with Culture (Country of Origin) moderating those perceptions)
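One way to make these three implications operational is a simple media-selection heuristic that starts from each technology's richness, interactivity, and social-presence profile and weights it by the team's cultural tendencies. The sketch below is purely illustrative and not drawn from the chapter; the numeric profiles, weights, and the 0-to-1 tendency inputs are assumptions chosen only to show the reasoning.

```python
# Illustrative heuristic: rank communication media for a virtual team, giving
# richer, more synchronous media extra weight when the team skews collectivist,
# high-context, or high in uncertainty avoidance. All scores are assumed.

MEDIA_PROFILE = {
    # medium: (richness, interactivity, social_presence, persistent_record)
    "e-mail":          (1, 1, 1, 1),
    "groupware":       (2, 2, 2, 1),
    "teleconference":  (3, 4, 3, 0),
    "videoconference": (4, 4, 4, 0),
}


def rank_media(collectivism: float, high_context: float, uncertainty_avoidance: float):
    """Return media sorted by a weighted fit score; inputs are 0..1 cultural tendencies."""
    ranked = []
    for medium, (rich, inter, presence, record) in MEDIA_PROFILE.items():
        score = (
            rich * (1 + collectivism)             # collectivist teams favor richness
            + inter * (1 + collectivism)          # ...and more interaction
            + presence * (1 + high_context)       # high-context teams need social presence
            + 3 * record * uncertainty_avoidance  # high-UA teams value stored decisions
        )
        ranked.append((round(score, 1), medium))
    return sorted(ranked, reverse=True)


if __name__ == "__main__":
    # Hypothetical team skewing collectivist, high-context, and high in uncertainty avoidance.
    for score, medium in rank_media(0.8, 0.7, 0.9):
        print(f"{medium:15s} fit score {score}")
```

A heuristic like this is only a conversation starter; as the chapter argues below, the team itself still needs to agree on which media suit its tasks and members.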

In summary, then, cultural differences can moderate perceptions of the "fit" between any given communication technology and virtual team tasks,2 as graphically shown in Exhibit 2. Another important cultural dimension concerns differences in perceptions of time. Specifically, because virtual teams are dispersed across time zones, cross-cultural differences related to the temporal coordination of work can impact the conduct of team activities and, ultimately, the team's performance.7 Individualist cultures (e.g., the United States) tend to monitor their time closely, while collectivist cultures (e.g., Asian) tend to be fluid in the use of time. These cultural differences in patterns of time management can therefore impact temporal coordination efforts in virtual teams.

WHAT SHOULD MANAGERS DO?

The author offers two key practical guidelines for managers dealing with culturally diverse virtual teams. First, explicit efforts need to be made at the organizational level to increase awareness of various cultures and the differences among them. At a team level, managers should develop strategies to manage cross-cultural communication differences among team members. These could include operating guidelines for the virtual team and language training. An initial set of face-to-face meetings is also recommended for virtual teams to help mitigate potential misunderstandings due to cultural differences as well as provide for the development of social relationships and trust. Detailed operating guidelines can then be used to help the teams coordinate their activities.

Second, it is important to acknowledge that culture does moderate perceptions of communication task–technology fit. While managers can provide multiple communication options to virtual teams, organizations must recognize that technology itself likely evokes different meanings and reactions among individuals with different cultural orientations; as one Western saying puts it, "one size does not fit all." Thus, a key to successful virtual teams is an assessment of the diversity in the group and the communication preferences of the various constituents. Managers must make an explicit effort to ensure that an appropriate set of communication technology choices is made available to the teams. The team itself then needs to come to an agreement on what communication media are appropriate for its various tasks. While this may not optimize individual choice, it should satisfy the collective needs of the virtual team.

CONCLUSION

As organizations have become more distributed across space, virtual teams are becoming increasingly common. Despite advances in technology and the growing deployment of virtual teams, many questions remain regarding the unique challenges and complex dynamics inherent to virtual teams. Selecting technologies that match the communication task requirements of a virtual team is a reasonable starting point. However, effectively supporting virtual teams also requires sensitivity to differences due to culture. Virtual teams require communication options that serve not only the work tasks of the teams, but also the different needs and expectations of culturally diverse team members. For teams with members from different countries of origin, cultural differences can affect the perceptions of, and preferences for, various communication and collaboration technologies, as well as time management behaviors. Technology may mediate communication, but as Hall puts it, "culture is communication and communication is culture." The challenge, therefore, is for organizations to create virtual teams that work and exchange knowledge effectively across space, time, and culture.

References

1. Duarte, D.L. and Snyder, N.T., Mastering Virtual Teams: Strategies, Tools, and Techniques that Succeed, Jossey-Bass, San Francisco, 1999.
2. Lipnack, J. and Stamps, J., Virtual Teams: People Working across Boundaries with Technology, 2nd edition, John Wiley & Sons, New York, 2000.
3. Gudykunst, W.B., Ting-Toomey, S., and Nishida, T., Eds., Communication in Personal Relationships across Cultures, Sage Publications, Thousand Oaks, CA, 1996.
4. Hofstede, G., Cultures and Organizations, McGraw-Hill, Berkshire, England, 1991.
5. Daft, R.L. and Lengel, R.H., "Organizational Information Requirements, Media Richness, and Structural Design," Management Science, 32(5), 554–571, 1986.
6. Hall, E.T., Beyond Culture, Doubleday, New York, 1976.
7. Montoya-Weiss, M., Massey, A.P., and Song, M., "Getting It Together: Temporal Coordination and Conflict Management in Global Virtual Teams," Academy of Management Journal, 44(6), 1251–1262, December 2001. See also Massey, A.P., Montoya-Weiss, M., Hung, C., and Ramesh, V., "Global Virtual Teams: Cultural Perceptions of Task–Technology Fit," Communications of the ACM, 44(12), 83–84, 2001.



Chapter 66

When Meeting Face-to-Face Is Not the Best Option
Nancy Settle-Murphy

When considering the need for face-to-face meetings, managers and employees need to ask themselves whether they can accomplish the same business goals by alternative communication means. Sometimes the initial response — and many times the right response — will be: "There is simply no substitute for face-to-face interaction." However, business units and departments need to challenge themselves by questioning which objectives cannot be met without face-to-face meetings, and why not. For example:

• Initial assumption: Our project team needs to work face-to-face at least once a month to iron out differences and create shared solutions.
  — New revelation: By diligently following a new process for reporting issues via e-mail and the Web, and by brainstorming solutions via weekly facilitated conference calls, our team can easily forego the monthly meetings in favor of quarterly meetings.
• Initial assumption: The account team must present the proposal to the client in person to secure a purchase order.
  — New revelation: While the client would like the account team to present the proposal in person, he or she would gladly exchange the personal visit for a cogently written proposal that includes a persuasive business case, making it easier to sell the proposal upward to senior management.



• Initial assumption: Our group has a number of issues and conflicts that are not being adequately resolved through remote means. We need to get together to thrash things out because the only way we know we are really being honest with each other is if we can see each other.
  — New revelation: There may be a number of steps your group can take that have not yet been tried, although you may eventually end up validating your initial assumption. First, find out why your current methods of conflict resolution are not doing the job. Are the right people involved? Too many? Not enough? Do some members fear that honesty will be punished in some way? If nonverbal communication is a critical missing piece, consider videoconferencing as an option. If your company has not made a sufficient investment in high-quality equipment, this may be a great time to make a compelling business case.
• Initial assumption: Our new management team has just been appointed, and our new goals and success metrics have just been outlined. We are heavily dependent on each other if the team is to succeed, and we have little time in which to accomplish our goals. Team members come from several different locations and disciplines. Most of us have never met. We need to meet face-to-face to get to know each other and to begin to build trust.
  — Validation of assumption: You are probably right. Meeting face-to-face will be essential in helping this new team coalesce quickly. However, make sure that ongoing team communications and principles for conflict resolution are some of the key topics you discuss in person.

EXPLORING ALL VIABLE OPTIONS

Assuming it is concluded that a face-to-face meeting is just not possible, what are some viable options to consider? The answer depends on many variables: the desired outcomes, how high the stakes are, who the participants are, their perceptions and predispositions, how familiar they are with each other, their geographic locations, and cultural differences. Consider the following options either to replace face-to-face meetings or to augment less-frequent face-to-face meetings, perhaps with fewer participants.

Conference Calls

Conference calls should be used with deliberate planning and excellent facilitation. If your group must rely more heavily than before on telephone communications, make sure the calls are thoughtfully planned. Questions to answer in planning conference calls include:

• What are the objectives of the call (i.e., decision making, issue reporting, information exchange, etc.)? Are objectives likely to change?
• Who should participate in regular calls? Are delegates allowed?
• Can someone participate "half-way"? For example, is it acceptable for members to read and send e-mail during the call, or take other calls if they come in? If so, under what circumstances?
• If different national cultures are involved, have we established standards around the use of English (e.g., avoid the use of idioms and local slang), the need for translation time, and the desire to keep responses concise and brief?
• What time and for how long should we schedule the call? Are the times equally convenient (or inconvenient) for all participants? Can we consider shifting the time every other week, or every month, to accommodate all equally?
• Who decides what the agenda topics will be? How are topics communicated? By whom? Who has input?
• What preparation is required to ensure that participants make the best use of phone time? Who makes sure that everyone knows what he or she must do to prepare? What if some people come prepared and others do not?
• Who facilitates the call? What are the principles regarding staying on track?
• Does someone capture decisions reached, minutes, etc.? If so, who? Do we rotate this responsibility?
• Do we need to establish additional mechanisms by which the team can share ideas, provide input, etc. between calls? Is e-mail enough, or do we need to think about some sort of chat forum or bulletin board posting?
• Has someone secured a dial-in number that everyone has in advance? Do we have sufficient ports to accommodate everyone?

Videoconferencing

Videoconferencing is especially helpful when witnessing nonverbal communication will contribute to the group's overall objectives:

• Repeat the checklist above, plus…
• Do all participants have reasonable access to good videoconferencing equipment?
• Have we created an agenda and allocated the appropriate time to meet our objectives, while making the best use of this technology? (For example, it probably makes sense to have people review any relevant material in advance, and then use the videoconference time to hash out issues, air differences, or brainstorm solutions, versus using the time to simply present material.)

• If presentations are to be made, how? For example, will they be presented on-camera or viewed by each participant via laptop? This will affect the planning and design of the presentation as well as the agenda.
• Does the technology allow for smooth, steady communications, or does time need to be built in for long pauses between speakers?
• Can we avoid scheduling videoconferences around meals? The sight and sound of people drinking and chewing can be distracting at best.
• Who is responsible for booking the systems and conference room?

E-mail

E-mail messages can help foster and sustain open communications if used judiciously. Questions to answer might include:

• Who is on the "To" list, and who gets cc'd? Under what circumstances? What are the implications? (For example, those on the "To" list need to provide a response; those on the cc: list need not.)
• Should we assign a convention to connote relative sense of urgency? (Example: A "U" in the subject line indicates urgent; an "A" indicates some sort of action is required; and an "I" signifies "FYI only.")
• Have we agreed on a standard for turnaround time? In what cases? Do some people in the group need a faster turnaround time than others? (For example, do our colleagues in Asia need a quicker response so that they can get the answer they need by the next day?)
• Do we have standards regarding brevity, accuracy, and clarity? (This is especially important when the group includes non-native English speakers.)
• Have we agreed when e-mail is or is not appropriate? (Example: We do not use e-mail publicly to "call" each other on mistakes or problems. If we must, we confine our distribution list to as few people as possible.)
• Are there any constraints on the size of attachments we can send? (For example, do we "zip" all files over 2 MB?)

Web Meetings

Use Web meetings especially when real-time interaction is important and anonymity may be desired:

• See the checklist for conference calls, plus…
• Does everyone have equal access to the technology? Is dial-in access speed acceptable for all participants?
• Have we agreed on principles regarding timing, agenda flow, facilitation of questions and answers, ownership of minutes, and other key elements?
• Will we use the phone as well as keypads to communicate?

• Have we established whether questions will be submitted anonymously or openly? Will we determine this for each meeting?
• Has everyone received any review material far in advance of the session, so participants can queue up whatever they need in advance?

Real-Time Data Conferencing and Electronic Meeting Systems with Audio/Video and Text and Graphics Support

This technology can be a powerful way to share information, discuss and brainstorm, and make group decisions. To what extent technology such as this can be used productively depends on a number of variables. Among them are the number and role of participants, company culture, access to and comfort in using the technology, degree of proper preparation, and overall effectiveness of the group's ability to collaborate. Many teams find it helpful to create matrices similar to the example in Exhibit 1 to guide them as to what meeting methods they will use as a default, depending on circumstances and objectives (one way to keep such a matrix in electronic form is sketched after Exhibit 1).

MAKING THE BEST OF FACE-TO-FACE MEETINGS

Face-to-face meetings in many cases will still be needed. To make sure that such meetings deliver the desired outcomes, answer the following questions during the planning phase to ensure the greatest meeting ROI:

• Who most needs to meet face-to-face? What are the implications of including some and excluding others?
• Do some need to meet face-to-face with greater frequency than others?
• Is it possible to rotate staff or team members who attend face-to-face meetings so that all may benefit?
• Have we allocated the appropriate amount of time, given the objectives, the stakes, the participants involved, and the general state of affairs? (For example, if this is a critical relationship to build or repair, be sure to allocate time for social exchanges in addition to the required business meetings.)
• What preparation will help participants make the best use of their time together? Are there any documents that can be reviewed in advance to allow participants to maximize face-to-face time together?
• Is everyone clear on all of the objectives? Are there any objectives that need to be surfaced in advance to allow everyone time to prepare?
• Can we alternate meeting times and venues to accommodate everyone? Or is deferring to one person or another important in terms of status, relationship, or some other factor?
• Have we thought about how we can involve those who could not be present? For example, is it possible for them to participate via the phone at any point? Can they help with the planning? Will they receive a meeting summary?

Exhibit 1. Default Meeting Methods

• Objective: Status reports. Communication method: Conf calls and one-page status reports. Frequency: Weekly as a rule. Leader: John L. Details: Certain project phases may require daily communication.
• Objective: Project planning and revisions. Communication method: Face-to-face with one representative from each discipline or function. Frequency: Quarterly. Leader: Anne P. Details: Input required from all participants; conf call to be held during last two hours for all.
• Objective: Conflict resolution (minor). Communication method: Face-to-face or telephone among affected people is preferred; e-mail among affected people if conversation not possible. Frequency: As needed. Leader: Team member who perceives a problem or conflict will or does exist.
• Objective: Conflict resolution (major). Communication method: Same as above, except that e-mail should be used only if absolutely necessary. Frequency: As needed. Leader: See above. Details: Senior team leader may initiate action much of the time.
• Communication method: Conf calls and Web meeting with entire team. Frequency: Monthly. Leader: Anne P. Details: Two consecutive half-days, alternating time.
• Communication method: Face-to-face with all team members. Frequency: Annually. Leader: Anne P. Details: Plan for four working days.
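Where a team keeps such a matrix in electronic form, the agreed defaults can also be encoded so they are easy to look up and to keep current. The sketch below is illustrative only and is not part of the original chapter; the objectives, methods, and names in it are hypothetical examples loosely modeled on Exhibit 1.

```python
# Illustrative sketch: a team's default meeting methods held as a simple
# lookup table, loosely modeled on Exhibit 1. All entries are hypothetical.

DEFAULT_METHODS = {
    "status reports": {
        "method": "Conference call plus one-page status report",
        "frequency": "Weekly",
        "leader": "Project manager",
        "details": "Certain project phases may require daily communication",
    },
    "project planning and revisions": {
        "method": "Face-to-face with one representative per discipline",
        "frequency": "Quarterly",
        "leader": "Program lead",
        "details": "Conference call held during the last two hours for all",
    },
    "conflict resolution (minor)": {
        "method": "Face-to-face or telephone among the affected people",
        "frequency": "As needed",
        "leader": "Team member who perceives the conflict",
        "details": "Use e-mail only if a conversation is not possible",
    },
}


def default_method(objective: str) -> dict:
    """Return the team's agreed default for a given meeting objective."""
    key = objective.strip().lower()
    if key not in DEFAULT_METHODS:
        raise KeyError(f"No default agreed for objective: {objective!r}")
    return DEFAULT_METHODS[key]


if __name__ == "__main__":
    rule = default_method("Status reports")
    print(f"{rule['method']} ({rule['frequency']}), led by {rule['leader']}")
```

Keeping the agreed defaults in one shared place makes it easier for a new team member to discover how the team has decided to communicate, and gives the team a single artifact to revise when circumstances change.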

SUMMARY

Organizations need to be clear as to which objectives really mandate the need for face-to-face meetings, and which can be met through alternative methods. While face-to-face will always be the preferred choice for creating new relationships and repairing those that have become fragile, other options, if used thoughtfully and with careful planning, can be surprisingly effective in achieving a wide range of objectives.



Chapter 67

Sustainable Knowledge: Success in an Information Economy
Stuart Robbins

Three concepts set the stage for this discussion of what is knowledge, what is knowledge management, and the critical role of collaboration behaviors in an information economy. First, we are an undeniable part of the technology that connects and sustains our personal and professional lives. Consequently, to improve our information systems, we must understand the relationships between the people who built them. In short, the systems are a mirror of those relationships.1

The second principle, in abbreviated form, is an understanding of the fundamental difference between a commodity-based economy (19th and early 20th century) and an information-based economy (late 20th century): in an information-based economy, the value of a piece of information is increased when it is shared rather than hoarded.

The third foundational theorem is based on Metcalfe's law: every node on a network increases the value of the network to which it is added, every phone adds to the usefulness of the phone network, and every accurate fact adds to the value of the library in which it is housed. The "equation of value" implicit in these principles suggests that organizations are nurtured by systems that connect them in meaningful ways.

DEFINING THE UNDEFINABLE: WHAT IS KNOWLEDGE?

Although it is difficult to talk about institutional knowledge, it is useful to think about knowledge as the top of a "ladder of business intelligence."2


Davenport and Prusak3 have provided a similar concept of graduated stages as follows:

Data → Information → Knowledge

Data, in its simplest terms, is best defined as individual facts stored in databases, one element per field in a table. An example would be a street address. By itself, unrelated to any other pertinent data, it is just a number combined with some words. However, when you relate that piece of data to another individual element — for example, my name — the two facts combine to form Information about me. By itself, unrelated to any additional pertinent information, it is nonetheless just a name and an address, one of millions. However, when you add a third element of information to the two previous pieces (i.e., that I have a five-year-old son), you begin to intuit a great deal about me. Knowing who I am, where I live, and the fact that I have a young son allows you to postulate many things about me (i.e., buying habits or personal concerns). If you are also a parent of a young child, you add what you know (i.e., your personal experience) to what you have learned about me, and you can begin to develop Knowledge about me, actively deduced in your effort to comprehend the data/information you have been given.

Davenport and Prusak3 rightly emphasize that the central element differentiating knowledge from data or information is the role of the human mind in its formulation: knowledge is something we actively and organically create, nurture, and provoke. It is not simply the overlapping of three related pieces of information that allows you to know a great deal about me from a small bit of text. Rather, it is your own intelligence and reasoning at work. Our intellectual wrestling match with the information invests it with meaning — it is this invested meaning that best describes the alchemy of knowledge.

That alchemy — an understanding of human knowledge and its relationship to other disciplines — is not an invention of Silicon Valley. Indeed, the hubris that has been prevalent in the "Knowledge Management" industry excludes a much broader framework from which to understand knowledge, and define it in meaningful ways. Knowledge did not suddenly become an asset during the past 20 years, as we entered the computer age and as our economies entered the information-based era. Human knowledge has many categories: analytic, scientific, phenomenological, empirical, hermeneutic, self-reflective, professional, and technical. The discussion of its qualities and composition has been central to the history of philosophy beginning with Immanuel Kant, followed by Hegel and Marx and Habermas, and continuing today.

It is a broad and remarkable discipline. To exclude that historical discussion from an effort to define knowledge, even in an institutional context, is understandable. Few can confidently embrace these immense concepts and themes of the philosophy of knowledge and, yet, it has been a substantial error in judgment on the part of an industry that is based squarely in its midst. We must acknowledge that this complex discipline is relevant, and our (admittedly minor) efforts to create useful taxonomies or design reliable methods for the navigation of technical and business information are only a small part of that history. In much the same way that current NASA research is based on aerodynamic principles prototyped at Kitty Hawk, our technical efforts to create usable knowledge bases on the Internet are solidly rooted in the history of information management, traceable to Dewey's decimal system and the library sciences that have been the framework of the non-digital world of knowledge management for decades.

Error #1: Our exclusion of this "knowledge heritage" has been an essential error that has handicapped our capacity to build artful and useful knowledge management products.

Therefore, any successful definition of institutional knowledge (the sum of corporate experience, intellectual assets, data/information frameworks, technical expertise, customer profiles4) is best served by comprehending our current solutions as a cycle in the evolution of humankind's ongoing effort to master the world around them — our dictionaries, encyclopedias, databases, and search engines, and our capacity to use them effectively in our professional and personal lives. Only then can we concentrate on definitions that are meaningful, yet targeted in scope.

DEFINING THE IMPOSSIBLE: WHAT IS KNOWLEDGE MANAGEMENT?

The management of institutional knowledge is never a simple task because it involves people. Unlike the elements of metadata in our databases, or the transactions in our ERP systems, the comprehension of that data-to-information flow that creates knowledge is necessarily human in dimension, that alchemical transformation in which the primary catalytic agent is an individual's intellect. In the words of Davenport and Prusak:3

"Since it is the value added by people — context, experience, and interpretation — that transforms data and information into knowledge…the roles of people in knowledge technologies are integral to their success." (p. 129)

Too little attention has been paid to the key business processes that must be transformed for the benefits of KM initiatives to deliver their promised return-on-investment projections.

In the words of Sheldon Laube, CEO of Centerbeam, as published on the company's Web site in May 2000:

"KM is really about people interacting with people…This isn't about installing one-size-fits-all software and waiting for the magic. This is about understanding the complex and often very subtle ways that groups of people interact and collaborate when pursuing a common goal — and designing processes that better support such interactions."

However, technology companies would much rather develop new algorithms than new ways of talking to people. IT consultants are frequently more adept at data analysis and schema design than they are in the subtleties of organization development, decision-making processes, and conflict resolution. And executives are more inclined to invest money in systems that can be capitalized (purchased) or expensed (leased) than they are to confide (to outside parties, or even to their own employees) about their company's inadequate inter-departmental cooperation.

Error #2: It is not possible to create IT systems to enhance collaboration in an environment that does not nurture collaborative behavior.

The successful KM solution must involve the key components of an institution: the people, as well as the systems they build and the data they gather. If the systems are viewed as troublesome, it is a reflection of the broken relationships in the corporate structure. To improve those systems (and improve the quality of information contained within them), you must start with an effort to improve the relationships between the people who construct, support, and use those systems. People trust people, not the technology that exists between them.5

The systems alone will not elicit change. Tools are enablers, at best. There must be direct and constant intervention, at every level of the organization, in the ongoing growth of a "knowledge culture."6 A knowledge culture is a culture of trust, fueled by senior executives, and threaded by values that encourage the sharing of institutional data–information–knowledge, rather than the hoarding of those critical assets. Reflected in the IT systems that mirror its values and processes, a knowledge culture increases its value every day. It is the essential element — the heart and mind — of an information-based economy. It is a people issue, not a technology issue, and it begins at the executive level.

A FRAMEWORK FOR COMMUNITY

The nature of institutional knowledge requires that a pluralistic framework for decision making be addressed early in a knowledge management initiative.

A cross-functional team composed of empowered representatives from all functional areas needs to be created. This is Phase 0 — the formation of a cross-functional and interdepartmental team (or, in the case of cross-institutional knowledge projects, a hybrid team composed of each institution's key contributors). The team needs to address how it will be governed, the processes and vocabulary of that governance, and the roles and responsibilities of various team members. In some cases, such a team may never have existed within the company before, and many of the managers could be unskilled in the matrixed style that is required.

Phase 0 is the beginning of a community, the result of a knowledge culture strengthened by a clearly defined socio-political framework. Such communities can assume many forms: project teams, business webs, professional associations, customer advisory boards, and collectives, each with a unique culture bounded by a framework and a clarified set of objectives. The lingua franca of these communities is the intellectual capital that is exchanged between the members of such communities.7 From this perspective, knowledge management can be defined as a strategic approach to maximizing the intellectual capital within and between communities, with real and implicit value to be derived by the larger institutions that sponsor them. Again, it is ultimately the ability of the institution's leadership (the executive teams and knowledge management project sponsors) to create and nurture an environment in which a trusted exchange of information/knowledge exists. That exchange must be rooted in a framework of behaviors that are the genesis of community, and that provide the autonomy and creativity such communities aspire to promote.

CHANGING INSTITUTIONAL BEHAVIOR

To help customer communities adopt the behaviors that are most conducive to knowledge sharing rather than hoarding, it is critical that the KM vendor community adopt those same behaviors. The vendor community must engender trust. It must expand, rather than discourage, the customer's ability to move data–information–knowledge between systems, between institutions, between networks, and through firewalls.

Error #3: The KM industry has unknowingly committed a fundamental error by ignoring a central ingredient of collaborative success: not adopting an "open source" approach to its data models, search engines, and API libraries (which would embody the kinds of behaviors that the vendor hopes to encourage in the customer community).

In an information-based economy, the value of an idea is increased when it is shared, and dwarfed when it is hoarded. Hoarding behaviors (an "us versus them" approach to competitors) prevent trusted partnerships and preclude the creation of communities. Most of the barriers to cross-institutional collaboration can be traced to the lack of standardized protocols and connectivity between those institutions. The KM industry must practice what it preaches, or risk the continued neglect of its technology.

CONCLUDING REMARKS: THE CONCEPTUAL AND THE REAL

Constraints on our organizations are quite real: economic downturns (a breach event that is not easily redressed), cultural differences between merged or acquired entities, and budgets that are inadequate to meet the scope of our many demands. Maximizing success in our constrained environments often means compromise in the areas of effort that are difficult to quantify, difficult to articulate, and even more daunting, difficult to understand. As such, knowledge initiatives are often de-prioritized, particularly in institutions where collaboration is not reinforced at the executive level. Yet the urge to move faster with fewer resources is a massive hurdle for knowledge initiatives to overcome:

Velocity + Knowledge = Success

but

Velocity - Knowledge = Failure

No organization can move quickly if it is mired in repetitive tasks. Recreating a template because the source file has been lost is a waste of valuable time. Rewriting an entire contract because the previous version cannot be found is a waste of time. The repeat interview, the repeat meeting, the repeat analysis… Our corporations and companies are filled with daily examples of intelligent people re-doing something they have done before, rather than spending their valuable time on something new.

Learning organizations — the ones that are not only capable of, but adept at adjusting to market changes or new challenges and quickly producing an excellent response — understand the value of leveraging their intellectual resources, of paying attention to their "lessons learned" and teaching those lessons to newcomers. In the end, the value of most knowledge management efforts is not (as some have proposed) the creation of an exceptional and useful repository. Rather, it is the dynamic creativity that can be unleashed, within and between institutions, in those liberated margins of time that allow us to think, and to innovate.

In an information economy, success is achieved by creating a human-centric, collaborative environment — methodologies and systems — that ensures sustainable knowledge.

Notes

1. For an expanded examination of this analogy, see Stuart Robbins, "The System is a Mirror: Turbulence and Information Systems," Proceedings of the 13th Annual International Conference on Systems Documentation (ACM), October 1995, 138–147.
2. Cates, J., "Viewpoint: Talking Tech to Business Executives," CIO Insight, March 1, 2002.
3. Davenport, T. and Prusak, L., Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston, 1998.
4. Kalish, D., "Knowledge Management across the Enterprise," unpublished working paper, Scient Corporation, 1999.
5. Friedman, B., Kahn, P., Jr., and Howe, D., "Trust Online," Communications of the ACM, Volume 43, Number 12, December 2000, 34–35.
6. Hauschild, S., Licht, T., and Stein, W., "Creating a Knowledge Culture," The McKinsey Quarterly, Number 1, 2001.
7. Tapscott, D., Ticoll, D., and Lowy, A., Digital Capital, Harvard Business School Press, Boston, 2000.



Chapter 68

Knowledge Management: Coming Up the Learning Curve
Ray Hoving

A new level of IT capability is emerging: knowledge management (KM). With KM, the intellectual assets as well as the information assets of the firm are maintained and applied through information technology. By capturing and organizing the pockets of knowledge within a company, the firm's core competencies can be developed and maintained. Once stored and made accessible electronically, this know-how can be shared as needed among employees and business partners. KM can enhance organizational effectiveness in numerous ways. The following examples are based on the author's research of KM practices in several corporations:

• A major food company captures extensive market research information to understand consumer preferences and then applies its marketing know-how to respond with the right products, packaging, and advertising programs.
• A large financial services company provides superior customer service by giving its customer representatives the information at their fingertips to immediately respond to questions from their investors.
• A construction materials company captures manufacturing know-how to ensure consistent, quality, low-cost production.
• A global electronics firm links its research and development activities throughout the world to accelerate the time-to-market of new products.



• A consulting company creates a repository of the insights gained from its various engagements across time to reapply this know-how for new clients.
• A large group of hospitals combines the know-how of its many specialists to provide computer-assisted diagnosis and treatment of chronic diseases.
• A major reinsurance company combines the knowledge of actuarial science with interpretation of the latest medical research to precisely predict trends in health care costs.
• A large chemical company taps the knowledge of its scientists, engineers, and manufacturing personnel to develop health, safety, and environmental programs along with emergency response preparedness.
• A major pharmaceutical company provides a systematic way to document clinical research to accelerate FDA approval.

THE DEFINITION OF KNOWLEDGE MANAGEMENT

Emerging topics are often difficult to clearly define in pragmatic terms at first. I offer a simple definition of KM, in 25 words or less:

The effective creation, use, and preservation of organizational know-how; in a collaborative business environment; enabled by use of advanced information technology tools and methods.

Several key points are built into this definition. The emphasis is on organizational know-how versus individual know-how. While the knowledge of individuals within a company makes up its organizational capability, the power of an organization comes from its ability to integrate each employee's knowledge and build it into its fabric. Although more difficult to master, the collective organizational know-how of the firm represents its true core competency.

The definition emphasizes preservation of knowledge as much as its creation and use. Useful knowledge atrophies quickly when not organized and maintained systemically. Organizational knowledge transforms into folklore and then into a distant memory, only to be reinvented by the next team assigned to the job. Preservation requires diligent effort to keep the knowledge fresh and applicable.

The definition also requires an important organizational context: a company culture and business environment that supports, expects, and rewards collaborative behaviors. The natural resistance of individuals to share knowledge must be overcome by a superordinate goal of cooperation for the good of the whole. Although IT can enhance collaboration by providing easy-to-use tools for sharing know-how across geographic and organizational boundaries, the company must have this strong desire for teamwork through information built into its culture.

The third element of the definition calls for the use of advanced information technology. Some may argue that KM can take place without computers. However, I believe its use would be limited in most organizations of any significant size. The ability to provide instant access to shared information, and to enable rapid communication among employees, customers, and business partners, is fundamental to successful KM.

The simplest way to think of KM is as the next level of capability in the application of computers for business solutions. Exhibit 1 depicts the eras of computing across the past four decades. The earlier computer systems could only deal with hard data such as accounting records. Now, thanks to the tremendous strides in areas such as electronic document management, information retrieval and library science, and image processing, we can store less-defined, unstructured data and still make sense of it.

Exhibit 1. KM: The Next Significant Plateau (figure)

The unstructured world is where most thinking and communication takes place (see Exhibit 2). It is said that only about 20 percent of the knowledge of an organization is captured in traditional transaction-oriented computer systems. The other 80 percent can be found in memos, lab notebooks, engineering diagrams, e-mail exchanges, sales contact reports, work procedures, and educational material. Most of this has been captured electronically in one way or another. The challenge of KM is to organize this unstructured material in a way that makes it come alive for the organization.

Exhibit 2. The Context for Knowledge Management (figure)
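As a concrete, if deliberately simplified, illustration of what "organizing unstructured material" can involve, the sketch below builds a small keyword index over a handful of documents so that they can be retrieved by topic. It is not taken from the chapter: the document names and contents are hypothetical, and a production system would add parsing of real file formats, stemming, access control, and relevance ranking.

```python
# Minimal sketch (illustrative only): indexing unstructured documents --
# memos, lab notes, contact reports -- so they can be found by keyword.
# Document IDs and text are hypothetical examples.
from collections import defaultdict

documents = {
    "memo-042": "Packaging trial results and consumer feedback from the spring promotion",
    "lab-note-17": "Polymer process model parameters and instrumentation changes",
    "sales-report-9": "Contact report: customer asked about packaging and delivery lead times",
}


def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into simple word tokens."""
    return [w.strip(".,:;()").lower() for w in text.split() if w.strip(".,:;()")]


# Build an inverted index: keyword -> set of document IDs containing it.
index: dict[str, set[str]] = defaultdict(set)
for doc_id, text in documents.items():
    for token in tokenize(text):
        index[token].add(doc_id)


def search(keyword: str) -> set[str]:
    """Return the IDs of documents that mention the keyword."""
    return index.get(keyword.lower(), set())


if __name__ == "__main__":
    print(sorted(search("packaging")))  # ['memo-042', 'sales-report-9']
    print(sorted(search("polymer")))    # ['lab-note-17']
```

Even a rough index like this captures the basic idea behind the document management and retrieval tools the chapter refers to: the unstructured text stays as it is, while a separate, structured layer makes it findable.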

THE BUSINESS CASE FOR KNOWLEDGE MANAGEMENT

Based on my research, I have derived four compelling reasons for an organization to invest in KM. Each is discussed in the following paragraphs.

1. The only sustainable competitive advantage of a company is the organization's ability to learn, remember, and change. One of the first companies to develop the concept of a learning organization was Royal Dutch Shell. Management believed the only truly sustainable competitive advantage of a firm is its organization's ability to learn. This thinking was further enhanced by Peter Senge's work on the learning organization in the late 1980s. My statement takes this thinking a step further. Granted, it is necessary for a successful organization to be continuously learning about things such as the dynamics of its marketplace and the best way to invent, produce, and deliver products and services. However, although learning is necessary, it is not sufficient. An organization must have the means to apply its knowledge and change positively as a result. KM provides a consistent, organized way to capture learning in a way that encourages the organization to make positive improvements. A third element of this principle, remembering, is also necessary to sustain success. Organizational memory can atrophy quickly if not cared for. KM provides the means to retrieve what is relevant at the right time. The blending of man and machine through KM application provides an organization with an extremely powerful resource. Humans are wonderfully creative but terribly forgetful. Computers have yet to become inventive, but once information is stored and categorized properly, they do not forget one bit of it.

2. Employees will be changing companies and careers more in the future. Organizational memory will deteriorate faster unless overtly preserved.

The career model of the workforce is undergoing revolutionary change. Today's worker is being motivated by employability more than by employment. Workers are switching jobs and careers much more often than the previous generation. The people you had expected to stay in your company to perform in key positions to invent, make, and sell your products; to provide work process leadership; and to preserve core competencies cannot be counted on to stay. Most of the time, when they leave, their know-how goes with them. By building KM into a company's culture, this know-how is expected to be shared among the minds of its employees and to be owned and maintained by the corporation through the use of information technology.

3. High-performance companies embed genuine teamwork in their culture. Organizations like to think they perform as a team, although unfortunately, many do not. The context for greatness through teamwork is found in the company's culture. A positive culture comes from the behaviors of its senior leaders: those who walk the talk, by demonstrating through their own actions that cooperation and active mutual support among employees yields the best return for customers and shareholders. With organizational culture as its context, communication is the tool for achieving teamwork. KM offers a systematic way for employees to communicate among each other, sharing their know-how, and cooperatively inventing new solutions to improve organizational effectiveness.

4. Information technology has reached a level of capability, ease of use, and cost performance to enable collaborative computing across distance and time. IT is now enabling an extended reach for organizational communication with the advent of personal computers, easy-to-use software, and sophisticated communications networks. A meeting of the minds previously required a physical gathering. Now, information, ideas, and action plans can be communicated as effectively across distance and time as down the hall in the same building. If applied properly, IT enables large-scale, diverse global corporations to have the intimacy of people working closely together for the good of the company. The trouble is that this powerful tool can be as easily misused. Information overload has run rampant. People can spend an entire day just reading and responding to e-mail. People seem to be following the motto: "When in doubt, send it out." The process discipline found in KM makes sure the right information is available to the right people at the right time.

KM plays an important role in a company regardless of its industry. Knowledge fits directly into the value proposition of the service sector, where it is embedded in the products sold to the consumer. Major consulting organizations, such as PricewaterhouseCoopers, Andersen Consulting (now Accenture), and Ernst & Young, began concerted efforts to manage their intellectual property several years ago. Today, they have sophisticated KM systems to capture and preserve their know-how derived from research and previous engagements, and extend it into their future business. In the manufacturing sector, knowledge is applied more internally for research and development (R&D), production, distribution, and marketing excellence. Exhibit 3 depicts, in graph form, the contrast of the application of knowledge by industry.

Now take a commodity industry, farming for example, and explore its use of KM. I live in the country in an old farmhouse on five acres. There is a 15-acre tract of land adjacent to my property, farmed by a neighbor. Each year, we wonder whether the crop is going to be corn, hay, or soybeans. The farmer rotates these crops, depending on soil conditions and economic predictions of demand. He uses "no-till" planting to avoid erosion.

Exhibit 3. Knowledge Application by Industry Sector

ple seem to be following the motto: “When in doubt, send it out.” The process discipline found in KM makes sure the right information is available to the right people at the right time. KM plays an important role in a company regardless of its industry. Knowledge fits directly into the value proposition of the service sector, where it is embedded in the products sold to the consumer. Major consulting organizations, such as PricewaterhouseCoopers, Andersen Consulting (now Accenture), and Ernst & Young, began concerted efforts to manage their intellectual property several years ago. Today, they have sophisticated KM systems to capture and preserve their know-how derived from research and previous engagements, and extend it into their future business. In the manufacturing sector, knowledge is applied more internally for research and development (R&D) production, distribution, and marketing excellence. Exhibit 3 depicts, in graph form, the contrast of the application of knowledge by industry. Now take a commodity industry, farming for example, and explore its use of KM. I live in the country in an old farmhouse on five acres. There is a 15-acre tract of land adjacent to my property, farmed by a neighbor. Each year, we wonder whether the crop is going to be corn, hay, or soybeans. The farmer rotates these crops, depending on soil conditions and economic predictions of demand. He uses “no-till” planting to avoid erosion. 848

Knowledge Management: Coming Up the Learning Curve Through the use of seed genetics, he is able to select varieties that are herbicide resistant and provide maximum yield in our Pennsylvania climate. Knowledge is certainly not embedded in the product sold (I do not believe anyone has declared soybeans a brain food), but the know-how to be a successful farmer is quite sophisticated. The success and survival of any company are predicated on its knowing how to do something better than its competition. An organized approach to developing, applying, and sustaining this know-how is a competitive necessity. BEST PRACTICES Based on my own research findings, companies will actively promote KM when (1) the business benefits of KM are obvious to executives, (2) teamwork through electronic collaboration is commonplace within the company, and (3) corporate computing infrastructure and transaction application systems are in good shape. Below are three excellent examples of best practices in thinking about and deployment of KM. Creating Organizational Readiness In 1998, Kraft Foods North America was a $17 billion-a-year food business of Phillip Morris. With its 70 major brands, the food company was the leading marketplace innovator in the food industry, having more than 300 new patents granted since 1990. Each day, more than 100 million consumers across North America enjoy at least one Kraft product. One out of every 10 cows do their work for Kraft. Jim Kinney, the former CIO at Kraft Foods, was both a visionary and a pragmatist. Like most CIOs of large corporations, his emphasis in 1998 was on blocking and tackling by getting over the Y2K hump and completing the implementation of large-scale transaction systems. However, Jim also made sure he and his organization would be prepared for significant new waves of IT application, such as KM. Kraft created the beginnings of its knowledge repositories under the directions of its then Chief Marketing Officer, Paula Sneed. They amassed a history of multi-media marketing campaigns to emphasize brand awareness, providing online access to consumer research, and creating a comprehensive technical research library for use throughout Kraft. Kraft’s vision for KM focuses on four areas: 1. Preserving and sharing internal knowledge about the business. 2. Promoting team-based initiatives across geographies and time. 849

FACILITATING KNOWLEDGE WORK 3. Reading the pulse of the consumer. 4. Reducing time to market. Kraft has an extreme consumer orientation in which its culture and essence are driven by understanding and responding to consumer preferences. It has targeted the research and development organization as an early adopter of KM application, both with regard to food science and consumer needs. CIO Kinney was ensuring that the IT-based environment to support KM will be ready to respond to the business drivers coming out of the vision. Weaving KM Into Strategic Intent In 1998, the Lincoln National Reassurance Company (Lincoln Re) was a subsidiary of Lincoln Financial Group, located in Fort Wayne, Indiana. It provided insurance to insurance companies, enabling these insurers to manage their life-health insurance risk and capital portfolios. Insurance companies have traditionally been information-intensive companies. Their success depends heavily on actuarial knowledge and risk predictability for customer segments. Reinsurance services require an even greater precision to achieve growth and profitability. Lincoln Re views KM as strategic to their business as evidenced by the 1997 annual report titled “Lincoln Re-Knowledge Management.” Art DeTore was Vice President of Strategic Planning and Knowledge Management for Lincoln Re. Larry Rowland was the President and CEO. As DeTore put it: “Knowledge Management is the heart and soul of our organization. It is strategic to our business and woven into all of our work processes.” Rowland further emphasized the importance: “Most companies focus on economies of scale; at Lincoln Re we focus on economies of knowledge.” Application of KM at Lincoln Re has produced strong business results. Management of structured knowledge in the form of expert systems rules and unstructured knowledge, usually found in documents, has become a core competency. The evidence of its success is overwhelming. Results of an independent study comparing more than a dozen leading reinsurance companies clearly differentiated Lincoln Re as providing the greatest value in the eyes of the consumer. One third of its new business comes from extensions of the value chain beyond classic reinsurance sales. Lincoln Re’s success can be attributed to several factors: (1) a sustained investment in IT and KM for over a decade, (2) a broad view of KM that is shared by all executives and employees, (3) embedding KM directly into business work processes, and (4) establishing performance measures that directly relate the KM activities to the income statement. 850

Knowledge Management: Coming Up the Learning Curve Overcoming Barriers to Knowledge Sharing Air Products and Chemicals, Inc. is a Fortune 250 manufacturer of products separated from air (e.g., nitrogen, oxygen, carbon dioxide, and argon), hydrogen, gases-related equipment, and intermediate chemicals. By 1998, its chemicals business had grown to exceed $1 billion in sales annually. The chemicals group of the company is an innovator with the use of collaborative computing technologies and methods. Glenn Beck, the director of chemicals group IT, reflected on the evolution of KM applications in chemicals: “Our Knowledge Management journey has proceeded in four stages of experiential learning. We first began by using DEC’s All-In-One for our scientific community on the VAX platform. We then phased out All-InOne with Notes Mail and discussion databases. In our third stage of learning we found out what true collaborative computing is all about. We are just now beginning to understand the social and cultural issues/opportunities of collaboration. We see the fourth stage, which we are just entering now, as true Knowledge Management where we are preserving and sharing our intellectual capital.” Vince Grassi, the manager of chemicals process modeling and control, was a strong proponent of collaborative computing throughout the process technology community in the chemicals group. This group represented more than 400 people in engineering, R&D, and manufacturing plant sites worldwide. All documentation related to its engineering efforts was kept electronically in discussion forums and databases. This enabled a tremendous amount of knowledge to be captured and redeployed as needed. As Grassi put it, “We follow a simple principle: ‘If it’s not in the database, it never happened…’ ” The practices resulting from this firm leadership have generated many early successes for the chemicals group: • Electronic conferences create good ideas. Traditional meetings with everyone present in the same time and space are biased toward those who can speak up spontaneously. Although this is great for bold and garrulous participants, ideas from those less spontaneously eloquent often get suppressed. Grassi related an example: “We had a meeting of plant and home office engineers to discuss production improvements. It was a well-run, classic jousting session that created some good ideas. The best idea, however, came two hours later when an engineer, usually quiet in meetings, crafted his thoughts as an entry in the electronic discussion database. This idea yielded a $200,000 benefit by changing the instrumentation system and workflow in the plant. We 851

FACILITATING KNOWLEDGE WORK have seen how electronic forums such as these open up new avenues for employee innovation.” • Knowledge of the whole organization exceeds the capabilities of the best individuals. The 400 scientists and engineers in the chemicals group contributed to over 100 project knowledge databases, which have been established around key products and processes. All related documentation and communication are kept in the knowledge repository. Grassi stated, “If knowledge resides in each individual, the organizational knowledge is very limited. By leveraging our knowledge, we can exceed the capabilities of our best experts and make it available to all.” • Impact of staff turnover is minimized. An engineer, who had been working for three years on a process model for polymer manufacturing, had transferred to a different group to take on a new assignment for the company. Although this was an important career move, there was great concern that his work would be lost. The engineer who took over for him was not nearly as familiar with the process modeling tools. However, after reading over all the well-organized notes in the knowledge database, the new engineer came up to speed in record time. This truly amazed the scientific community, made believers in KM out of many of them, and, most important, enabled the polymer modeling program to continue without any loss of knowledge. • Product introduction time is greatly reduced. Air Products’ Specialty Chemicals business acquired technology of a new product from another company. This intellectual property had to be translated into English, understood, and used as a base for creating manufacturing capacity in three plants located in three different countries. Use of Notes and the KM practices in place at Air Products enabled manufacturing of this product to go online and on budget in record time. Leadership in the business community has been enthusiastic, unwavering, and even forgiving to an extent. This atmosphere, where employees believe in such principles as “if it’s not in the database, it doesn’t exist,” has made a significant difference in successful deployment. Employees have no choice but to share knowledge because the only accepted repositories are in shared databases. The natural resistance to sharing knowledge has been overcome through use of this tool, along with appropriate rewards and incentives from management. Attention to quality is also extremely important. Each of the 100 knowledge databases had an assigned moderator. Guidelines for content management were published and discussion forums were monitored for quality and consistency. These hygienic approaches were critical to achieving successful deployment of KM initiatives. Overall progress and conformance to strategic intent were ensured through a “collaborative business solutions 852

Knowledge Management: Coming Up the Learning Curve steering team” of key senior managers. User ownership and accountability were clear. GETTING STARTED WITH KNOWLEDGE MANAGEMENT KM is the next significant extension of IT capability in organizations. Corporations should prepare for widespread use by planning the KM architecture and experimenting with pilot applications. There are, however, four prerequisites to achieving success with KM: 1. The basic computing and telecommunications infrastructure needs to be in good shape, including broad use of office tools and e-mail for collaboration. KM requires a computer-literate workforce utilizing tools that enable them to easily communicate and share information with each other. This process requires a foundational infrastructure to be established, standardized, and made operationally sound. The building blocks to KM include a telecommunications network with the capacity to handle transfer of fairly large amounts of information, a PC and server environment that can readily handle introduction of new applications, and a database environment for consistent storage and retrieval of corporate data. In addition, fundamental office tools for document preparation, spreadsheets, and presentations need to be consistent throughout the firm. And, finally, an email capability with the capacity and ease of use to encourage widespread collaboration is a must. 2. The company’s transaction-based applications and databases need to be extensive, timely, and accurate. Given the premise that KM is a culmination of both structured and unstructured information, it is necessary for a company to have the basic transactions of its business enabled through IT. If KM programs within a firm only concentrate on high-level subjective information derived from the intuition of its leaders, the program would be missing the essential ingredients of the analytical data needed to make the right decisions. The results of many years of investment in the fundamental transactional systems of a corporation will yield new insights as companies get a handle on a system’s operating details in order to derive knowledge from the evidence. 3. Senior management needs to be educated on the concepts and benefit potential in order to endorse KM investment. They say that education is the key to understanding. Most enlightened senior managers welcome the opportunity to become educated regarding new concepts of value to their company. However, in our time-constrained world, education must be accomplished efficiently. Short, to-the-point educational experiences that weave the theory and concepts with practical examples relevant to their company are just what executives 853

FACILITATING KNOWLEDGE WORK seek. As stated earlier, emphasis on benefit potential must be given the highest attention. It is difficult to relate any IT investment directly to bottom-line results such as return on investment and return on equity. IT is so pervasive and integrated with work processes that the direct cause and effect of IT alone are difficult to measure. Given this, many people shy away from doing benefit analyses of IT investments. However, it is much better to be approximately right than to be precisely wrong. Linking KM investment proposals to nonmonetary operating indicators, such as cycle time reduction, reduced error rates, and improved yields, will enable executives to relate these key operating indicators to their intuitive feel of bottom-line return. 4. The culture and reward structures of the organization must be supportive of knowledge sharing among employees. This is the toughest nut to crack. It is not that people do not want to change; it is just that they do not want to be changed. This makes behavioral modification the most difficult to achieve in individuals, and it is further compounded when the objective is organizational behavioral change. People will take the path of least resistance virtually every time. When the pain of change is less than the pain of staying where they are, people will move to the new state. If a company’s culture allows people to hoard their knowledge and compete within the firm, people will go right on doing it. When people see that their performance and their careers are in jeopardy if they do not get with the new program of collaborative team-based business conduct, they will choose the new path. The Air Products case provides an excellent example. Once the prerequisite conditions are met, proceeding with KM is a much less daunting task. However, the best way to assimilate new technologies and work processes is through planned experimentation and piloting before full-blown implementation. Experiments are small proof-of-concept applications that demonstrate the value of a new initiative to the company. Pilots are live use of the initiative on a small scale. Pilots should be done before full implementation to understand implementation requirements and target true benefit potential. KM should follow these principles of technology assimilation. Selection of the right demonstration projects with the right executive champion is critical to achieving the momentum required for full-scale implementation. Bear in mind that use by proactive early adopters is the easiest to achieve. The tough part comes in winning over the silent majority. Given that the power of KM comes through its ubiquitous application across company boundaries, the end game is not achieved until widespread deployment is achieved. 854

Recommended Reading

Davenport, T.H. and Prusak, L. 1998. Working Knowledge: How Organizations Manage What They Know. Boston: Harvard Business School Press.
Senge, P.M. 1990. The Fifth Discipline: The Art & Practice of The Learning Organization. New York: Doubleday/Currency.
Stewart, T.A. 1997. Intellectual Capital: The New Wealth of Organizations. New York: Doubleday.


Chapter 69

Building Knowledge Management Systems Brent J. Bowman

Why do some organizations prosper and grow while others struggle? There are a variety of interrelated factors that determine the answer to such a complex question, and these factors differ for organizations in different places and at different times. However, one key ability that is common to all successful organizations is effective creation of relevant business knowledge and the timely dissemination of that knowledge to those members of the organization who need it. Effective management of organizational knowledge is becoming recognized as perhaps the most significant factor in determining organizational success: Knowledge has become the preeminent economic resource — more important than raw material; more important, often, than money. Considered as an economic output, information and knowledge are more important than automobiles, oil, steel, or any of the products of the Industrial Age. —T.A. Stewart, 1997

There are several reasons why this organizational capacity has become critical. Organizations face many challenges that were not present just a few short years ago. The volatility and rate of change in business environments requires that organizations continuously adapt and adjust structure and processes to remain competitive. Computing and telecommunications technology enables the capture and distribution of ever-increasing amounts of information at ever-increasing rates of speed. This enables new ways of doing business and allows competitors to rapidly overcome strategic advantages that had seemed secure, if not permanent.


FACILITATING KNOWLEDGE WORK A second challenge is introduced by the globalization of markets and labor pools. As organizations establish offices in widely dispersed locations, ensuring that the information necessary to manage the production of the firm’s goods and services becomes a much greater challenge. Coordinating processes when the participants are separated by time and distance is difficult, and solutions to problems encountered in one location must often be rediscovered elsewhere in the organization. The emergence of knowledge-intensive products and services increases the amount of information exchange within and between organizations. Mechanical components in manufactured products have been replaced by microprocessors and software. In the financial industry, complex new financial products such as derivatives have emerged, software-controlled trading programs are common, and securities are routinely traded over the Internet. It is difficult to find managers and professionals in any organization who do not use information technology (IT) to perform their job tasks. All of these factors have led to the emergence of what has been called “information age organizations” (Applegate et al., 1999). These organizations have the following characteristics: networked, team-based workers who are empowered with more decision-making authority; flatter organizational structures with less direct supervision and levels of management; and shorter cycle times to develop new products and bring them to market. Information processing requirements for this type of organization are much greater than in the more stable, hierarchical forms of the past. Given the environmental influences mentioned above, effective management of organizational knowledge is becoming a significant determinant of whether an organization flourishes or fails. For this reason, many organizations are devoting considerable resources to the development of knowledge management systems (KMS). Activities involved in creating, codifying, and distributing information about entities and issues of significance to an organization can be referred to as knowledge management. A KMS is an IT-based system developed to support the organizational knowledge management behavior (Alavi, 2000). The purpose of this chapter is to describe the technologies utilized to enable employees to retrieve the information needed to perform their work tasks. Organizations have used technologies such as document preparation, CAD/CAM/CASE, and artificial intelligence tools to create and codify knowledge for some time. This has resulted in considerable stores of useful information but the tools for making it available to the general audience of potential users are just now emerging. This chapter distinguishes between knowledge management and information retrieval, and describes two different KMS models: the network and repository models. 858

Building Knowledge Management Systems It focuses on the repository model and discusses the technologies used to create knowledge bases and identify features that should be considered when selecting repository tools for an organization. KNOWLEDGE MANAGEMENT VERSUS INFORMATION RETRIEVAL A primary objective of knowledge management activities in organizations is to allow employees to access and utilize the rich sources of information stored in unstructured forms; for example, word processing documents, spreadsheets, and engineering schematics. By unstructured, we mean information that is stored in formats other than traditional databases. There is general confusion about the difference between knowledge management and information retrieval. To understand the difference, it is useful to distinguish between unstructured information stored in documents and multimedia formats, and structured information that is derived from data stored in traditional databases. It is difficult to imagine a modern organization of any size that does not utilize IT applications to capture and store information needed to operate and control its key processes. Applications such as order fulfillment and financial accounting include all the functions to collect, store, and disseminate the structured information associated with these processes. This data can then be retrieved and distributed as reports and queries, or extracted as a spreadsheet for further analysis. The emphasis is on accurate and timely processing of the information necessary to operate and control organizational processes. Data warehouses are also based on structured data. Rather than accurate processing and control of processes, the objective of a data warehouse is to provide a source of internal and external data to support decision making in organizations. External data such as total market sales for the organization’s products are added to internal data gleaned from process automation applications to create rich longitudinal databases. Information retrieval and data analysis tools such as data mining, statistical packages, query language, and extraction software provide powerful capabilities that allow employees to explore relationships between products, sales, and customers. Considerable time and resources have been devoted to creating data warehouses over the past decade (Watson et al., 2001). Powerful as they are, information retrieval technologies based on process automation applications and data warehouses do not provide access to the wealth of information stored in unstructured forms. Knowledge management initiatives in organizations are typically directed at utilizing IT to provide access to this information. Organizations are implementing KMS at an accelerating pace. In a survey of 431 organizations conducted by Ernst & Young (1997), 40 percent reported ongoing or completed KMS projects, and 25 percent planned to implement such a project in the coming year. 859

FACILITATING KNOWLEDGE WORK KNOWLEDGE MANAGEMENT MODELS New technologies are emerging that allow organizations to build KMS that are intended to store and disseminate unstructured information. Recent research suggests that two complementary KMS models are emerging (Alavi and Leidner, 1999; Fahey and Prusak, 1998). The network model utilizes directories and communications technologies to connect knowledge owners with knowledge users. The repository model utilizes IT to capture, organize, store, and distribute explicit organizational knowledge. To further understand the distinction between the two approaches, it is useful to consider the nature of organizational knowledge. Nonaka (1994), building on work done in psychology, classifies knowledge as either tacit or explicit. Tacit knowledge refers to knowledge that is difficult to articulate, action oriented, and based on experience. Examples include the ability of a graphic artist to create attractive images for a product brochure or an experienced maintenance engineer to quickly diagnose many different types of equipment failures. In contrast, explicit knowledge can be articulated in symbolic form. Examples include written documents such as policies and procedures manuals, mathematical or chemical formulas, financial models embodied as spreadsheets, engineering schematics, and training videos. Knowledge is classified by the degree to which it can be articulated, with “tacit” and “explicit” being on opposite ends of a continuum. In reality, different forms of organizational knowledge are positioned along this continuum. A network KMS supports the transfer of both types of knowledge but is particularly well suited for tacit knowledge. In this view, Knowledge work is less a matter of the application of predefined expertise and more a joint product of human interactions with information and intellectual assets delivered through information and communication technologies. —H. Scarbrough, 1999

A network KMS does not attempt to codify knowledge held by an organization's experts. Rather, it focuses on building knowledge directories. This involves defining the knowledge categories that are relevant to the organization, identifying knowledge owners for each category, and creating a searchable directory to help others in the organization identify and locate knowledge owners. The other component of a network KMS is a rich set of communication and collaboration tools to support the distribution and sharing of the knowledge. PricewaterhouseCoopers (2001) predicts that organizations will recognize that network-based KMS may be effective for ad hoc knowledge sharing but are not true KMS in that they lack the capability to systematically identify and retrieve information created as part of the past experiences and stored as unstructured documents.
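To make the network model concrete, the short Python sketch below shows one way a searchable knowledge directory might be represented: categories are linked to knowledge owners, and a lookup returns contact information rather than stored documents. This is an illustrative sketch only; the class names, example owners, and example categories are invented for this discussion and are not drawn from any particular product.

# Illustrative sketch of a network-model knowledge directory: categories map
# to knowledge owners, and a simple search helps employees locate an expert
# rather than a stored document.
from dataclasses import dataclass, field

@dataclass
class KnowledgeOwner:
    name: str
    contact: str                            # e.g., an e-mail address
    categories: set = field(default_factory=set)

class KnowledgeDirectory:
    def __init__(self):
        self.owners = []

    def register(self, owner: KnowledgeOwner):
        self.owners.append(owner)

    def find_experts(self, category: str):
        """Return contact details for owners registered under a category."""
        return [(o.name, o.contact) for o in self.owners
                if category.lower() in {c.lower() for c in o.categories}]

# Hypothetical usage; names and categories are invented for illustration.
directory = KnowledgeDirectory()
directory.register(KnowledgeOwner("A. Rivera", "arivera@example.com",
                                  {"drilling operations", "safety"}))
directory.register(KnowledgeOwner("B. Chen", "bchen@example.com",
                                  {"pricing", "proposal development"}))
print(directory.find_experts("pricing"))    # -> [("B. Chen", "bchen@example.com")]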

Exhibit 1. Structure of a Repository Knowledge Management System

The repository model views knowledge as an object that can be collected, stored, and disseminated (Alavi, 2000). A KMS based on this model, by definition, focuses on explicit organizational knowledge; that is, knowledge that is captured in some observable form. Exhibit 1 shows the structure of a knowledge management system based on the repository model. Information items that are stored in the knowledge repository, or knowledge base, can originate either internally (i.e., within the organization) or be produced externally. The information items may be any document, graphic, schematic, or audio/video file that is of interest to the organization. Several examples of the knowledge items that might be included in a knowledge repository are shown in Exhibit 1. The software to create and utilize a knowledge repository consists of two interrelated sets of tools: repository creation and management tools, and repository access tools. Exhibit 1 also identifies some of the features and capabilities to look for when selecting knowledge management repository tools for an organization. These features are discussed in the following sections.
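The repository structure just described can also be sketched in code. The following illustrative Python fragment is not drawn from any vendor's toolset and all names are hypothetical; it shows knowledge items carrying origin, media type, knowledge-map categories, and keywords, together with the keyword and category indexes that the access tools discussed below would rely on.

# Minimal sketch of a repository-model knowledge base: each item carries
# origin, format, category links, and keywords so that both the knowledge
# map and the search engine can reach it. Purely illustrative.
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class KnowledgeItem:
    item_id: str
    title: str
    origin: str                  # "internal" or "external"
    media: str                   # "document", "schematic", "audio", "video", ...
    categories: list = field(default_factory=list)   # knowledge-map topics
    keywords: list = field(default_factory=list)

class KnowledgeRepository:
    def __init__(self):
        self.items = {}
        self.keyword_index = defaultdict(set)    # keyword -> item ids
        self.category_index = defaultdict(set)   # topic   -> item ids

    def add(self, item: KnowledgeItem):
        self.items[item.item_id] = item
        for kw in item.keywords:
            self.keyword_index[kw.lower()].add(item.item_id)
        for cat in item.categories:
            self.category_index[cat.lower()].add(item.item_id)

    def by_keyword(self, keyword: str):
        return [self.items[i] for i in self.keyword_index.get(keyword.lower(), set())]

    def by_category(self, topic: str):
        return [self.items[i] for i in self.category_index.get(topic.lower(), set())]

repo = KnowledgeRepository()
repo.add(KnowledgeItem("doc-001", "Drilling best practices", "internal", "document",
                       categories=["Drilling"], keywords=["drilling", "best practices"]))
print([i.title for i in repo.by_category("drilling")])   # -> ['Drilling best practices']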

FACILITATING KNOWLEDGE WORK Corporate intranets have been the technology platform of choice for implementing these applications. Web portals such as Yahoo have evolved as the model of the user interface for accessing a repository. This model consists of keyword search capability and a set of topics or categories that describes and organizes the knowledge items. Repository KMS also provide access to messaging and gateways to other organizational applications. A key task in establishing a knowledge repository is to define the categories of information that will populate the repository. This is referred to as the knowledge map in Exhibit 1. It is critical that this scheme accurately describes the knowledge items and that it represent the topics of interest to the organization as it will be the framework presented to users who wish to locate and retrieve information to answer questions and solve problems. Organizational mechanisms must be put in place to continuously locate and assess new candidates for inclusion in the knowledge base. Once selected, the items must be linked to the predefined category topics, indexed with the appropriate keywords, and entered into the repository. Many organizations, such as Chevron, have established new teams charged with performing these tasks. These teams are also responsible for making employees throughout the organization aware of the knowledge repository and helping them use the information to achieve organizational goals. Ongoing management of the intranet involves selecting and implementing a rich set of search and retrieval tools, providing training on their use, managing security, and monitoring capacity and performance of the site. There are many examples of recent projects undertaken by organizations in an attempt to build such knowledge repositories, often called a knowledge base. Hewlett-Packard developed an application, called the Electronic Sales Partner, that contains over 10,000 current documents related to products, proposals, and sales (Skyrme, 1999). A multi-national chemical company utilized a Web-based knowledge management strategy to create and deliver a comprehensive set of electronically distributed educational and training programs (Pan et al., 2001). Chevron developed a “best practices” knowledge base that captures information about drilling conditions and innovative solutions to problems associated with different drilling situations encountered in its production locations (Sveiby, 2001). For an extensive discussion of knowledge management applications that have been implemented in organizations, see Housel and Bell (2001). It is important to note that the network and repository models are not mutually exclusive. A comprehensive KMS should include explicit knowledge retrieval capabilities such as those described above and a knowledge directory that identifies experts for each of the knowledge categories included in the repository. The option to retrieve the names of the experts 862

Building Knowledge Management Systems on a particular topic should be presented when the topic is selected. A KMS of the type shown in Exhibit 1 could easily accommodate this capability. KMS TECHNOLOGIES There are a number of technologies with functionality that is particularly well suited to KMS. Many of these technologies have evolved over time from origins rooted in other processing tasks. Because knowledge management has received increased attention in organizations, the distinction between some of these technologies has blurred. For example, the functionality of traditional document management systems that focused on managing the creation, maintenance, and version control of documents has been incorporated into groupware/collaboration applications. The current versions of the traditional office suites include simple collaboration capabilities. Exhibit 2 identifies several technologies that have features useful for KMS. Corporate intranets based on World Wide Web (WWW) standards have been the platform utilized for many KMS initiatives. This is due to the ease with which documents and multimedia materials can be linked and made widely available to anyone with an Internet connection and a browser. As the number and variety of information sources linked into an intranet grow, organizing and conceptualizing the collection become problematic. A scheme for categorizing information according to topics relevant to the organization and search engines to assist employees with locating and retrieving selected items are needed. The widespread familiarity and popularity of Internet portals such as Yahoo, which supports the categorization and retrieval of Web sites, has fueled the demand for similar capability with respect to materials on corporate intranet sites. Enterprise information portal (EIP) software has recently emerged as a tool that enables organizations to provide this functionality. These products, which are based on WWW technologies, use the familiar browser interface, and provide access to stored information using keyword search engines and knowledge maps as retrieval tools. Rather than organizing Web sites, they provide access to the variety of documents, design schematics, and multimedia files that make up an organization’s knowledge collection. The EIP products are intended to provide the functionality depicted in Exhibit 1 and described in the subsequent section. The estimated market for EIP software is expected to grow substantially. (See PricewaterhouseCoopers, 2001, for an overview of the various products offered and a forecast of the growth in the EIP marketplace.) FEATURES OF A REPOSITORY KMS A repository KMS needs a rich set of features if it is to satisfy the broad requirements associated with building, managing, and utilizing the reposi863

FACILITATING KNOWLEDGE WORK Exhibit 2. Knowledge Management Technologies Intranets. Corporate intranets may be considered to be early forms of repository KMS. They utilized WWW standards to simplify the tasks of storing and distributing a wide range of materials in different file formats. Web authoring tools. The HTML editors, graphic design tools, proprietary animation packages, and streaming video/audio tools that have emerged since the advent of the WWW provide useful technologies for creating intranets that offer a rich variety of text and multimedia content. Document/content management systems. These tools were initially developed to assist in the creation, version control, storage, and retrieval of complex sets of word processing documents. They have expanded to handle additional file formats and often provide knowledge maps to categorize the stored materials. Search engines. This technology was developed by the library science community to enable bibliographic search and retrieval. It was quickly adapted for the WWW. Search engines provide the ability to index any type of file by keyword for subsequent search and retrieval. Office suites. The traditional office automation suites have added features that support Web publishing and collaboration features such as discussion threads and change tracking. The latest versions can integrate with browsers to provide automatic notification of document changes and Web file management capabilities similar to those for local file systems. Collaboration software. Collaboration tools enable employees to share access to documents in multiple formats, both synchronously and asynchronously. Collaboration tools are not directed at storing, retrieving, or otherwise managing a repository. Rather, these tools assist in the creation of new materials or the modification and refinement of materials that have been retrieved from the repository. Enterprise information portals. This software is designed to create and manage knowledge repositories. Access is via the familiar browser interface. EIP products have a broad range of useful features, including search engines, knowledge mapping, repository personalization, standing queries, affinity group filtering, and simple collaboration.

tory. The features described below may not be present in any single software product or category of product described in Exhibit 2. It may be necessary to assemble a set of tools to provide this functionality. However, the listed features should serve as a checklist for managers who are considering acquiring tools to establish a KMS repository. User Interface Design The ability to control the user interface to the repository and its features is very important. When selecting tools, the repository administrator 864

Building Knowledge Management Systems should consider whether there is a standard interface or whether it can be customized to be consistent with other organizational applications. Common Web portals such as Yahoo provide models that are well understood and accepted, but the ability to add specialized functionality that is specific to your organization is important. Of course, customizing an interface adds time and cost to creating the repository, so ease-of-use must be considered as well. Text Search and Retrieval One of the most obvious features of a KMS repository is the ability to retrieve information based on keyword searches. The ability to process Boolean searches (i.e., retrieval on the basis of keywords joined into complex expressions using the AND, OR, and NOT operators) is the most basic feature of a search engine. However, there are many additional features that can enhance the engine’s utility. A user who wishes to retrieve information on a particular topic must select the correct keywords that were assigned to the document at the time it was indexed and added to the repository. Strict Boolean expressions are evaluated as either true or false. A search engine that supports fuzzy logic allows the retrieval of materials that are “mostly” true. This would allow retrieval of documents that are not necessarily perfect matches for the search expression, but may in fact have relevant information. Relevance ranking of search results is a feature that takes on increased significance as the size of the repository increases. Everyone has experienced the case where the results of a search are so voluminous as to be useless. A search engine should help users narrow down the items to be reviewed by ranking the items according to their relevance to the search expression. Techniques for ranking documents include counts of the keywords in the body, title, and footnotes, ranking based on the occurrence of least common terms in the search expression, and Bayesian inferencing (i.e., scoring the frequency of the search term in a document relative to the frequency of occurrence in the entire collection). A single search engine can support multiple ranking strategies. For organizations operating in particularly volatile environments, the emergence of new terms is an ongoing challenge. For example, new technical terms and acronyms may require that the entire collection of materials be re-indexed. Common use of terms such as “knowledge management” and “enterprise information portals” is a recent phenomenon. The ease with which new terms can be defined and associated with existing materials in the repository can be an important consideration in the ongoing usefulness of a search engine. 865
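As an illustration of the relevance-ranking ideas mentioned above, the following Python sketch scores each document by how often the search terms appear in it, weighting terms that are rare across the whole collection more heavily. It is a toy example in the general spirit of the term-frequency and collection-frequency techniques just listed, not a reconstruction of any specific engine's algorithm; the sample documents and query are hypothetical.

# Toy relevance ranking: rare terms in the collection count for more, and
# documents containing more occurrences of the query terms score higher.
import math
from collections import Counter

def tokenize(text):
    return [w.lower().strip(".,;:()\"'") for w in text.split()]

def rank(documents, query):
    """documents: dict of doc_id -> text; query: string of search terms."""
    doc_tokens = {d: Counter(tokenize(t)) for d, t in documents.items()}
    n_docs = len(documents)
    scores = Counter()
    for term in tokenize(query):
        containing = sum(1 for counts in doc_tokens.values() if term in counts)
        if containing == 0:
            continue
        rarity = math.log(n_docs / containing) + 1.0   # rarer terms weigh more
        for doc_id, counts in doc_tokens.items():
            scores[doc_id] += counts[term] * rarity
    return [doc_id for doc_id, score in scores.most_common() if score > 0]

# Hypothetical mini-collection for illustration.
docs = {
    "proposal-2001": "pricing model for the client proposal and scope",
    "drilling-notes": "drilling conditions and mud pressure observations",
    "sales-guide":    "qualifying the client and setting the proposal price",
}
print(rank(docs, "client pricing"))   # proposal-2001 ranks ahead of sales-guide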

FACILITATING KNOWLEDGE WORK Another issue when evaluating search engines is the nature of the interface for initiating a search. The simplest method for the tool vendor to implement involves typing the Boolean expression into a textbox. Other methods are available that are more intuitive and easier to use. For example, form-based approaches permit users to enter keywords into forms and join them into complex search expressions by point-and-click selection of operators. Natural language interfaces allow users to type a query just as they would ask the question. The utilization of natural language interfaces has become more commonplace with the advent of Web search engines such as AskJeeves. Multimedia Search and Retrieval Does your organization have a collection of audio and video files? Some firms have accumulated considerable quantities of these types of materials for training and marketing support, or as a result of customer service/help desk monitoring or videoconference recording. If audio/video materials are to be accessible via the KMS, the files must be indexed and available for retrieval using the search engine. Manual indexing of these materials is always an option but this can become very time-consuming for large organizations. Products based on speech recognition technology have recently become available that convert the audio information to text to support keyword searching. Another approach is to search the closed-caption text. Indexing a video on the basis of the images is much more problematic. Technology to classify, search, and index images in video files is under development but it is not yet mature (PricewaterhouseCoopers, 2001). This feature might be expected to undergo considerable refinement and become more widely available in the next generation of KMS repository tools. Knowledge Mapping The definition of knowledge maps, or taxonomies, to group the materials in the repository into categories relevant to the organization is another fundamental feature of a repository. Once the materials in the repository are linked to the knowledge categories, this feature becomes an alternative retrieval method that supplements the search engine. In some cases, users may be uncomfortable identifying keywords and may wish to start a search by looking at the predefined knowledge topics. This can reduce the number of irrelevant materials that are often identified by the search engines. Here again, the Web portals with their widely used taxonomies have established this functionality in the minds of users. The taxonomies are typically hierarchical, but multidimensional views of linked documents/categories are starting to become available. 866
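A hierarchical knowledge map of the kind described above can be modeled as a simple tree in which items are linked to topics and a browse of any topic also gathers the items filed under its subtopics. The sketch below is illustrative only; the topic names and item identifiers are invented.

# Minimal hierarchical knowledge map (taxonomy): topics nest inside parent
# topics, items are linked to topics, and browsing a node also returns items
# filed under its subtopics.
class TopicNode:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.item_ids = set()      # knowledge items linked to this topic

    def add_child(self, child):
        self.children.append(child)
        return child

    def browse(self):
        """Collect item ids from this topic and every subtopic beneath it."""
        found = set(self.item_ids)
        for child in self.children:
            found |= child.browse()
        return found

# Hypothetical taxonomy fragment for a consulting firm.
root = TopicNode("Solution Selling")
scoping = root.add_child(TopicNode("Scoping"))
pricing = root.add_child(TopicNode("Pricing"))
scoping.item_ids.add("scope-template-07")
pricing.item_ids.add("pricing-case-study-12")
print(sorted(root.browse()))   # both items, since both topics sit under the root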

Building Knowledge Management Systems Definition of the knowledge categories can be accomplished by human designers or through the use of software tools that analyze a collection of materials and suggest a classification scheme based on the observed content. These tools utilize techniques such as word co-occurrence, noun phrase extraction, and Bayesian inferencing to develop recommended taxonomies (PricewaterhouseCoopers, 2001). The current state of the automatic classification technology is such that it is advisable for humans to review and refine the recommended classification scheme. In addition to human- and software-defined taxonomies, standard templates for different industries are starting to emerge and are included in many EIP products. Maintenance of the knowledge taxonomy scheme can become a significant challenge. New categories that are important to an organization may emerge and others will decline in today’s rapidly changing environment. This problem is similar to the keyword indexing maintenance problem previously discussed. The ability to add, merge, and delete topical categories is critical to maintaining a current taxonomy. Furthermore, it is important that the repository technology provide the ability to link existing materials to the new categories, regardless of whether the new terms appear in the documents. Automatic reclassification becomes critical as the size of the repository increases. Personalization The vice president of sales and marketing may not be interested in new information associated with engineering techniques. Likewise, a distribution manager does not need access to tax rulings pertinent to his industry. A personalized view of repository contents, wherein only those categories that are relevant to a particular job title are displayed, reduces complexity and helps employees focus on topics germane to their job. Personalizing a view of the repository by creating user profiles can be manually performed by the repository administrator or automatically by creating profiles based on terms in e-mail headers or historical retrieval patterns. Users should have the ability to modify their profiles as their job duties and interests change. Standing Queries The repository engine should be able to notify users whenever new repository entries matching their interests are received. This is accomplished with a technique often referred to as a standing query. All new materials must be indexed prior to entry into the repository. The keywords are compared to those supplied with the list of standing queries and the appropriate users are either notified or automatically sent a copy of the new material. The repository engine should permit the administrator to define global standing queries that relate keywords to all users with particular 867

FACILITATING KNOWLEDGE WORK profile characteristics (e.g., job title). Users should also be allowed to post individualized queries. Affinity Group Filtering Another technique for filtering relevant information for specific users is referred to as affinity group filtering. Over time, tables defining topic preferences are created for each user. The preferences may be explicitly supplied by the user or automatically derived from historical retrieval patterns. Once the tables have been established across the entire organization, employees with similar preferences can be identified as affinity groups. If one member finds new information useful, the document can be forwarded to other members of the group. For example, if a manufacturing plant manager in one region creates a cost analysis document and enters it into the repository, manufacturing staff in other regions would be automatically notified. Knowledge Directories Knowledge networks were described above as being a separate model of KMS. These networks identify experts who have knowledge about the categories included in the knowledge taxonomy and provide contact information. However, as indicated, the repository and network models are not mutually exclusive. As a user searches the knowledge map, the repository engine should identify the experts for each topic, in addition to the collection of stored materials. Links to the organization’s messaging application would permit a user with a question to immediately contact the expert via e-mail. Collaboration and Messaging Providing links to the messaging application is a first step in creating contact between knowledge owners and users. However, much richer forms of collaboration are available and should be supported in a comprehensive KMS. As workgroups become more dispersed, simply retrieving past documents and other knowledge materials is not sufficient. For example, an employee team charged with preparing a proposal to a client could retrieve past proposals, which, while extremely valuable as a starting point, would have to be modified and refined to meet current needs. Collaboration allows team members who are separated in time and space to share access to the materials necessary to complete the new proposal. These materials could include project documents, workplans, personal schedules, discussion groups, etc. A KMS that provides internal collaboration capabilities or is integrated with the organization’s collaboration application adds considerable value for the user. 868
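The affinity-group filtering just described reduces to comparing topic-preference sets between users. The Python sketch below uses a simple overlap measure to find a user's affinity group; the user names, topics, and the 0.5 threshold are assumptions made for illustration, not a description of any product's matching rule.

# Sketch of affinity-group filtering: topic-preference sets are kept per user,
# users whose preferences overlap strongly form an affinity group, and an item
# found useful by one member can be forwarded to the others.
preferences = {
    "plant.mgr.region1": {"cost analysis", "yield", "maintenance"},
    "plant.mgr.region2": {"cost analysis", "yield", "scheduling"},
    "sales.lead":        {"pricing", "proposals"},
}

def jaccard(a, b):
    """Overlap of two preference sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def affinity_group(user, threshold=0.5):
    """Other users whose preferences overlap the given user's above the threshold."""
    mine = preferences[user]
    return [u for u, prefs in preferences.items()
            if u != user and jaccard(mine, prefs) >= threshold]

# A plant manager in one region flags a useful cost-analysis document;
# the repository would forward it to the manager's affinity group.
print(affinity_group("plant.mgr.region1"))   # -> ['plant.mgr.region2']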

Building Knowledge Management Systems Gateways to Enterprise Applications and Other Computing Resources Construction of a truly comprehensive KMS requires that users have access to all the organization’s information and computing resources. The portal concept implies a single interface for employees, which provides all the information necessary to perform their jobs. A cost analyst may wish to access materials in the KMS but will also need access to the financial applications. Likewise, marketing staff will need access to order and customer data. A link or gateway to the enterprise financial, order fulfillment, and HRM applications is a valuable addition to the KMS. Gateways to forms automation applications and the Internet are also needed. Of particular importance is the ability to access the structured information in the organization’s data warehouse. As discussed above, the focus of KMS repositories is to store and disseminate information stored in other formats, but a data warehouse and data mining and extraction technologies are also powerful tools. Incorporating access to the data warehouse is an important step toward the single corporate information portal that satisfies all organizational information requirements. REPOSITORY MANAGEMENT TOOLS The discussion thus far has viewed KMS from the standpoint of the functionality provided to the users. However, building a successful repository requires a variety of tools to manage the repository itself. Features such as re-indexing the collection with new keywords and reorganizing the knowledge taxonomy scheme have already been mentioned. The ease with which new materials are added to the repository is an important consideration. Is the repository a series of hyperlinked files, or does it require a separate storage structure? The latter approach requires tools to import and cleanse documents prior to entry. The repository will require metadata that includes the taxonomy scheme, links to documents, user authentication data, user historical access profiles, affinity groups, user security profiles, and document access restrictions. Effective management of this information will be critical to the long-range success of the KMS. Security is also a major concern. User authentication and the ability to set access restrictions are critical. An organization may want to control access by individual document, category, or geographical location. Open access to the collection of the organization’s knowledge is the primary objective of a KMS but management may want some materials to be restricted. For example, some documents may be intended for only seniorlevel managers or only those employees from a particular geographic region. The repository administrator also needs the ability to create user security profiles where access permissions are specific to individual users. 869
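The access restrictions described above (by individual document, by category or geographic region, and by user security profile) can be sketched as a simple check, as in the illustrative Python fragment below. The field names and the example restriction are assumptions made for the sake of the example, not a description of any particular product's security model.

# Sketch of the access checks described above: a document may be restricted by
# individual user, by region, or by job level, and a user profile carries the
# attributes checked against those restrictions.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    region: str
    job_level: str                          # e.g., "senior-management", "staff"

@dataclass
class DocumentRestriction:
    allowed_users: set = field(default_factory=set)       # empty set = no limit
    allowed_regions: set = field(default_factory=set)
    allowed_job_levels: set = field(default_factory=set)

def can_access(user: UserProfile, restriction: DocumentRestriction) -> bool:
    if restriction.allowed_users and user.user_id not in restriction.allowed_users:
        return False
    if restriction.allowed_regions and user.region not in restriction.allowed_regions:
        return False
    if restriction.allowed_job_levels and user.job_level not in restriction.allowed_job_levels:
        return False
    return True

# A document limited to senior managers in North America (hypothetical values).
restricted = DocumentRestriction(allowed_regions={"NA"},
                                 allowed_job_levels={"senior-management"})
print(can_access(UserProfile("u1", "NA", "senior-management"), restricted))   # True
print(can_access(UserProfile("u2", "EU", "staff"), restricted))               # False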

FACILITATING KNOWLEDGE WORK This KMS should allow security restrictions to be assigned to both repository contents and individuals. The administrator will also need tools to monitor performance of the system and to track utilization. Utilizing the KMS is at the discretion of employees. Assuring adequate processing capacity for the KMS is an essential first step in ensuring adequate user satisfaction. Employees are less likely to use a KMS if they perceive problems such as poor response time. Tracking utilization patterns is also important in managing the repository. Knowing which individuals are retrieving which materials is necessary to determine the success of the KMS and provides insight into which knowledge categories are reviewed most often. Analysis of this information may suggest ways to reorganize the knowledge taxonomy or identify topics where the collection needs to be expanded. CONCLUSION KMS have the potential to increase productivity in organizations manyfold. Organizations are becoming more geographically dispersed and are adopting new work arrangements such as virtual teams and telecommuting. This, coupled with the ever-increasing rate of change in the environment, has led to a situation in which the ability to leverage an organization’s knowledge assets has become a critical determinant of organizational success. Knowledge management systems are a potentially valuable tool to achieve this goal. The tools to create KMS are not yet mature but the EIP products that have recently appeared on the market show great promise. This chapter identifies and describes KMS models, technologies, and the features that would be expected in a comprehensive system. It is possible that no single product includes all of these features at this time. KMS designers may be faced with integrating multiple products to create an environment with the desired flexibility and power. Selecting the tools and creating an effective KMS is only one of the management considerations in maximizing the benefits inherent in an organization’s collected knowledge. Organizations that have pioneered KMS initiatives have experienced a variety of problems. Experts in organizations have not always been forthcoming in codifying their expertise for entry into a KMS because they see it as proprietary and a source of personal advantage. Organizations must find incentives that encourage experts to provide useful materials and that encourage others to use them in completing their job tasks if the potential of KMS is to be realized. Providing the organizational mechanisms to identify, enter, and maintain the materials for the KMS and training employees in its use is time-consuming and costly. 870

A final challenge is measuring the contribution of the KMS. Intuitively, there is great value in effectively leveraging an organization's knowledge assets, but how the economic value should be defined and measured is less clear. An investment in a KMS becomes an infrastructure investment whose benefits are often subtle and intangible. However, without some estimate of value, it can be difficult to obtain the funding necessary for a KMS initiative. It seems obvious that we have much to learn before KMS will truly fulfill their potential.

References

Alavi, M. and Leidner, D., "Knowledge Management Systems: Issues, Challenges, and Benefits," Communications of the Association for Information Systems, an electronic journal, 1(2), 1999.
Alavi, M., "Managing Organizational Knowledge," Framing the Domains of IT Management, Zmud, R., Ed., Cincinnati: Pinnaflex Educational Resources, 2000.
Applegate, L. et al., Corporate Information Systems Management: Text and Cases, Boston: McGraw-Hill Irwin, 1999.
Ernst & Young Center for Business Innovation and Business Intelligence, 1997.
Fahey, L. and Prusak, L., "The Eleven Deadliest Sins of Knowledge Management," California Management Review, 40(3), 265–280, 1998.
Housel, T. and Bell, A.H., Measuring and Managing Knowledge, Boston: McGraw-Hill Irwin, 2001.
Nonaka, I., "A Dynamic Theory of Organizational Knowledge Creation," Organization Science, 5(1), 14–37, February 1994.
Pan, S.L., Hsieh, M., and Chen, H., "Knowledge Sharing through Intranet-Based Learning: A Case Study of an Online Learning Center," Journal of Organizational Computing and Electronic Commerce, 11(3), 179–195, 2001.
PricewaterhouseCoopers, Technology Forecast: 2001–2003, PricewaterhouseCoopers Technology Center, Menlo Park, CA, 2001.
Scarbrough, H., "Knowledge as Work: Conflicts in the Management of Knowledge Workers," Technology Analysis and Strategic Management, 11(1), 5–16, 1999.
Skyrme, D., Knowledge Networking: Creating the Collaborative Enterprise, Boston: Butterworth Heinemann, 1999.
Stewart, T.A., Intellectual Capital: The New Wealth of Organizations, New York: Doubleday/Currency, 1997.
Sveiby, K., "What is Knowledge Management?," Web site (accessed October 2001), http://www.sveiby.com.au.
Watson, H., Annino, D., Wixom, B., Avery, K., and Rutherford, M., "Current Practices in Data Warehousing," Information Systems Management, 18(1), 47–55, 2001.


Chapter 70

Preparing for Knowledge Management: Process Mapping Richard M. Kesner

Within an enterprise, knowledge management (KM) is that process which identifies and brings to bear relevant internal (i.e., from within the enterprise) and external (i.e., from outside the enterprise) information to inform action. In other words, information becomes knowledge as it enables and empowers an enterprise's workforce, known in this context as its knowledge workers. The information components of any KM process come in one of two forms:

1. Explicit knowledge: structured and documented knowledge in the form of written reports, computer databases, audio and video tapes, etc.
2. Tacit knowledge: undocumented expertise in the heads of the enterprise's knowledge workers or external third-party subject or process experts

The task of a KM process is to both organize and disseminate explicit knowledge and to bring together knowledge workers and the appropriate explicit and tacit information required for their assignments. It is perhaps obvious that the greatest challenge to those charged with KM is the targeting and conversion of tacit to explicit knowledge and its timely delivery to those in need.

Prior to the advent of the Internet, many business leaders simply viewed KM as an expensive and somewhat frivolous undertaking. However, as it has become clear that well-informed employees and informated business processes positively impact corporate performance, executive resistance to KM has softened.


The arrival of the ubiquitous Internet has provided organizations with a relatively low-cost platform for information gathering and sharing. In brief, then, the value proposition for KM is as follows: KM optimizes the value of what your people know in delivering value to your customers, and hence KM serves as an enabler of performance excellence. KM is all about enabling, empowering, directing, and energizing the workforce in fast-paced, evolving, and increasingly virtual enterprises. A partial list of the tangible benefits from such a KM program might very well include:

• A better understanding of the marketplace and customer needs
• A more effective sales process as measured in terms of customer retention
• Faster and higher-quality product/service time-to-market
• Reduced operating costs and overhead
• Reduced new venture/current operating exposure
• A higher level of innovation
• The broad-based adoption of industry and process best practices
• Higher employee retention

Of course, the suppliers of what currently comprises information technology product and service delivery make the same benefit claims. The author has no intention of overselling KM. It can work for you, but getting there is not easy! It is the purpose of this practicum to tie a field-tested KM process to your enterprise's key business process drivers and, in so doing, open the reader's eyes to the possibilities.

ARCHITECTING A KM PROCESS

Each enterprise's business processes dictate their own information needs and therefore their associated KM requirements. Although the typical enterprise has no more than six or so key business processes, each organization more actively focuses on some subset of these processes, depending on the nature of its business, its marketplace, and its competitive challenges. An example set of key business processes within an enterprise might include:

• Lead generation: the processes of market and competitive analysis, market awareness/branding, customer prospecting, and prospect profiling and identification
• Sales cycle management: the process of converting prospects into customers
• Delivery management: the "manufacturing"1 process, including customer and engagement management
• Distribution management: the process of product/service delivery to the customer

• Financial management: the processes of accounting, accounts receivable administration, accounts payable administration, financial controls, and reporting
• Human resource management: the processes of payroll and benefits administration, staff recruiting and retention, and staff training and development

The reader should note that the aforementioned logical groupings of business processes capture most, if not all, activity within an enterprise. Once an organization's management team has settled on a high-level categorization scheme for its key processes, it will need to identify one or more of these to serve as the focus for initial KM efforts. Typically, the team will select the process in greatest need of KM's benefits. For example, if an enterprise has just expanded its sales force or faces problems with customer retention, the team might select sales cycle management as a KM focus. Similarly, if the enterprise cannot compete in terms of the cost effectiveness of its manufacturing process, the team might choose delivery management for a KM-enabled makeover. Whatever the ultimate choice, remain focused and build upon the initial, limited forays into process analysis and KM.

To assist in selection, the management team should consider the scope of information to be collected, categorized, and managed. To this end, the author offers a simple knowledge-components model within which the reader may characterize all enterprise knowledge assets:

• Marketplace/customer knowledge: information about business barriers and opportunities, customer profiles, prospect profiles, market demographics, etc.
• Content knowledge: information about the enterprise's actual products and services, performance history, staff competencies, intellectual and physical assets, etc.
• Process knowledge: information about how the enterprise manages itself in terms of key processes (e.g., solution selling, manufacturing, distribution), as mentioned above

Some of this content will come from internal sources (either explicit or tacit) and the rest will come from external sources (almost entirely explicit). While all business processes require information from all three families of knowledge components, each key process will differ in terms of its relative need for particular informational elements. By employing this simple lens, the enterprise's management team can quickly focus on those business processes to be enabled through its KM efforts.

From the outset, consider the following basic steps to implement a KM process. First, executive leadership must appreciate the need for and the value of the systematic management of the enterprise's knowledge assets.

The true test of such an appreciation will be the formal chartering of a KM process and the commitment of adequate funding for the effort. The charter should define KM team roles and responsibilities, desired outcomes, and process metrics. As its initial set of activities, the KM team would then conduct high-level process mapping to derive KM requirements. As a next step, the team would complete a knowledge asset inventory and gap analysis. Armed with this information, the team can then proceed to design a knowledge components repository and its associated retrieval taxonomy.2 The final phases of work entail the construction of a KM platform for the actual collection, storage, and retrieval of content and a KM process that promotes the systematic collection and subsequent reuse of knowledge assets.

To successfully deliver an enterprise-level knowledge management solution, an organization must devote dedicated resources to the undertaking. The specific players include:

• An executive sponsor: sanctions the effort, clears away political/turf barriers, and funds the activity
• A chief knowledge officer: may hold other roles within the organization as well but, in terms of the KM effort, serves as its chief architect, catalyst, and enterprise liaison
• Working clients: the formal and active owners of the process(es) to be informated through KM
• Knowledge workers: the enterprise's employees who are in possession of both the organization's explicit and tacit knowledge and who will ultimately employ KM services in their work
• The KM core team: one or more professionals whose primary focus is the construction of the KM process, including knowledge component collection and cataloging and the creation and ongoing maintenance of the KM automated platform

With the enterprise committed, funding in place, and roles and responsibilities defined, the KM core team will work with their working clients and appropriate knowledge workers to map out the process(es) that will serve as the focus for KM. Process mapping will in turn generate the framework and blueprints for a formal KM solution.

BUSINESS PROCESS MAPPING

The author's business process mapping methodology entails seven key sets of deliverables. These elements may be summarized as follows:

1. Process decomposition, including:
   • A brief process definition and overview
   • A list of process assumptions and operating principles
   • A process workflow

   • A list of process inputs, outputs, and associated customer deliverables
   • A map or checklist of the information technologies that enable, automate, or informate the process
2. A roles and responsibilities matrix
3. A set of approval rules governing the process
4. A list of access rights (to data, systems, etc.) for those operating within the process
5. Performance metrics that measure process outcomes
6. Process templates, frameworks, standards, and tools (i.e., the reusable components of the process)
7. The so-called "knowledge library" of explicit process knowledge, such as scenarios, case studies, best-in-class examples, and linkage to internal and external supporting information resources

Together, these mapping components provide a complete picture both of the business process itself and of its underlying knowledge management requirements. The remainder of this chapter examines these mapping elements in some detail, employing the Solution Selling process within the XYZ Consulting Firm as a case study example. However, before proceeding, the author would like to make the case, albeit briefly, for employing a business process mapping methodology to launch a KM effort. First and foremost, business process mapping focuses on the underlying business value(s) of the process under consideration. If the enterprise is to invest its human and financial capital in KM, the value proposition for that investment must be clear and cogent from the outset. In so doing, the proponents of KM will enjoy the ongoing support of key executives, allowing the core team to devote its energies to process execution. Mapping also helps define business priorities and particular problems around scrap and rework, poor communication, and an inability to leverage knowledge assets that KM will help ameliorate. Mapping also creates a holistic view of the current-state realities around business process delivery. Following this, mapping leads to gap analysis and the definition of a desired state involving knowledge worker enablement, collaboration, and resource sharing and leveraging. Of course, the devil is in the details. So without further delay, consider the elements of business process mapping. PROCESS DECOMPOSITION At the highest level, process decomposition reveals the rationale behind each key business process and its underlying assumptions and operating principles. It requires a clear and complete high-level map or workflow of how the enterprise should execute the process, citing all its essential process components and deliverables. As a complement to this map, the core team should prepare a table that defines the inputs, outputs, and deliver877

FACILITATING KNOWLEDGE WORK ables for each process step. This element documents the “value chain” of the process under analysis. Finally, as part of process decomposition, the team maps major process components to their associated enabling technologies. In the example, Solution Selling for the XYZ Consulting Firm, the business process may be defined as follows: • A team-based, problem-focused selling process that proactively identifies business opportunities and presents the value proposition for addressing those opportunities to targeted, qualified clients • A holistic process that encompasses diagnosing, proposing, and closing a business proposition that solves a client business problem • Selling solutions that are tailored and customized to the particular needs of a given client although the actual deliverables may be cast from standardized components • Many times a cross-organization, team-sell approach; intense collaboration to derive the right result • Not a product sheet and a price, but a process that draws on unique competencies and knowledge bases to create a complete solution for the client Note that the aforementioned definition is concise and explicit, providing all interested parties with a clear sense of what the firm means by Solution Selling. Also note the references to knowledge bases, implying an underlying KM component to the process. The assumptions and operating principles for Solution Selling further clarify the process’s working definition: • Solution Selling is an iterative process. • Negotiating and communicating with the client throughout the process is key: antennas must be up for client pain, restraints, preferences, etc. • Price should not be discussed with the client for the first time in the proposal; however, it should be addressed as early as possible — certainly during scoping and scope agreement. • Consultants must involve the sales department because this department contacts clients or prospective clients, and these efforts must be recorded in the appropriate relationship log in either the sales force automation system or the professional services administration system. • Whoever initiates a Solution Selling instance must get others involved as early as appropriate to ensure success of the process. • The process must be appropriate with the business. • Where possible and appropriate (e.g., for off-the-shelf products and services) payment in full should be required up front. • There can be no new work without a credit check and clearing of past balance due accounts. 878

Here, the firm's process owners have clearly articulated norms of behavior and performance standards, if not desired process outcomes. From the KM core team's perspective, this information begs the question: "how can KM enable and empower this process?" To answer this question, the KM team should identify and map the entire process in some detail and then examine each process activity for opportunities. As a start to formal mapping, simply make a list of all key process steps in sequential order. Do not be concerned about leaving out details or the rigor of the sequencing. The list is merely an aid in drawing the process map. The latter artifact will accommodate concurrent activities. Its execution will also solicit process steps from working client(s) and the core team that may have been neglected. The list for the Solution Selling process steps would be as follows:

• Target prospect.
• Contact and interact with the client.
• Identify and define opportunity.
• Validate and agree on problem/opportunity (the firm and client).
• Qualify client.
• Identify buying criteria.
• Scope potential assignment.
• Agree on scope with client.
• Set price (start negotiating with client).
• Obtain senior management approval and sign-off of proposal to client.
• Prepare and present proposal (continue negotiating with client).
• Prepare and present contract (finalize negotiations with client).
• Obtain purchase order/PO number from client prior to the start of the assignment.
• Kick off project.
• Track interim process results.
• Track process outcomes.
• Report on process results to appropriate project stakeholders.

With this checklist, the time has come to develop a process map (Exhibit 1). Note that the example of a process workflow in Exhibit 1 is not particularly detailed, nor does it need to be. As the reader will observe, other process mapping elements complement the actual flow diagram and fill in the particulars. Nevertheless, the aforementioned diagram does capture primary process steps, decision trees, roles and responsibilities, rules, and outcomes. With this blueprint in hand, the core team and the working client partners will next create a simple matrix (a data sketch follows the list below) that lists:

• Each process step
• The inputs and outputs of that step
• The party(ies) responsible for that step's execution and deliverables
• The customer and process deliverables emerging from that step
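Capturing the matrix in a consistent, machine-readable form makes it easier for the KM core team to store, share, and later reuse the mapping data. The following minimal Python sketch is illustrative only: the record fields mirror the four columns above, and the sample values are paraphrased loosely from the Solution Selling matrix in Exhibit 2.

# Minimal sketch of one row of the inputs/outputs/deliverables matrix described
# above, so mapping data can be captured consistently across process steps.
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    name: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    role_owners: list = field(default_factory=list)
    deliverables: list = field(default_factory=list)

solution_selling = [
    ProcessStep(
        name="Target prospect",
        inputs=["market/prospect research and analysis"],
        outputs=["targets agreed to"],
        role_owners=["Consultants", "Sales", "BUs"],
        deliverables=["List of qualified targets"],
    ),
    ProcessStep(
        name="Qualify client",
        inputs=["credit checks", "client financials and information"],
        outputs=["green light on ability and willingness to pay"],
        role_owners=["Sales", "Finance"],
        deliverables=["Client price tolerance determined"],
    ),
]
print(solution_selling[0].role_owners)   # -> ['Consultants', 'Sales', 'BUs']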

Exhibit 1. Process Map

This exercise makes explicit what was perhaps implied in the workflow. Furthermore, it captures the more incremental elements of the process as well as outcomes and responsible parties. In the case of the Solution Selling example, the matrix appears in Exhibit 2. To conclude the decomposition activities of the mapping process, the author has found it useful to capture the current state of those information technology systems that enable the business process being mapped. For example, in the case of Solution Selling, the XYZ Consulting Firm employs a lead/sales campaign management system, a customer relationship management (CRM) system, and a professional services administration system (for engagement management and delivery and for project cost accounting), represented in Exhibit 3.

Exhibit 2. Solution Selling Matrix

Process step: Target prospect
  Inputs: market/prospect research and analysis. Outputs: targets agreed to.
  Role owner(s): Consultants, Sales, BUs
  Customer/process deliverables: List of qualified targets

Process step: Contact/interact with the client
  Inputs: qualified targets. Outputs: prospects approached.
  Role owner(s): Consultants, Account managers
  Customer/process deliverables: Approach to specific target

Process step: Identify/define opportunity
  Inputs: industry, market, prospect information. Outputs: specific opportunities defined; if an off-the-shelf product, go directly to contract.
  Role owner(s): Sales, Consultants
  Customer/process deliverables: Sufficient information to craft an approach for prospect; contract

Process step: Validate/agree on problem/opportunity
  Inputs: opportunities; ideas presented. Outputs: consensus on focus for sales/delivery team.
  Role owner(s): Sales, Delivery, Client
  Customer/process deliverables: Consensus with client on opportunity under consideration

Process step: Qualify client
  Inputs: credit checks, client financials and information. Outputs: green light on client's ability and willingness to pay; determination of client's price tolerance.
  Role owner(s): Sales, Finance
  Customer/process deliverables: Determine client's ability and willingness to pay; price tolerance

Process step: Identify buying criteria
  Inputs: client information; competitive intelligence. Outputs: alignment of the firm's selling process with prospect's buying process.
  Role owner(s): Sales
  Customer/process deliverables: Determine how decision is to be made and by whom; competition

Process step: Scope potential assignment
  Inputs: client information, process intelligence. Outputs: scope document.
  Role owner(s): Sales, Delivery
  Customer/process deliverables: Scope document

Process step: Agree on scope
  Inputs: scope document. Outputs: consensus on scope.
  Role owner(s): Sales, Delivery, Client
  Customer/process deliverables: Consensus with client on scope

Process step: Set price
  Inputs: scope document; pricing experience. Outputs: pricing for proposal.
  Role owner(s): Sales, Delivery
  Customer/process deliverables: Based on scope, price determined

Process step: Prepare/present proposal
  Inputs: scope document and pricing decision; resources availability. Outputs: proposal.
  Role owner(s): Sales, Delivery
  Customer/process deliverables: Proposal delivered

Process step: Prepare/present contract
  Inputs: proposal and negotiations. Outputs: contract.
  Role owner(s): Sales, Delivery
  Customer/process deliverables: Contract delivered

Process step: Kick off project
  Inputs: contract. Outputs: engagement and delivery plans.
  Role owner(s): Engagement management team
  Customer/process deliverables: Project initiated


Exhibit 3. Information Technology Systems that Enable the Business Process

The importance of this modest step may not be apparent, but in acknowledging the information systems that currently support the business process, the KM team identifies both potential sources of knowledge content and potential technology platforms for the construction of a subsequent KM solution. For example, XYZ Consulting's CRM system tracks sales cycle data. With this information, the KM team could establish a knowledge service that links new sales personnel with sales veterans who possess the requisite expertise. Similarly, the professional services system already houses project engagement knowledge that could be shared with new service teams faced with similar assignments. In total, the elements of process decomposition provide the KM core team with a rich understanding of the business process, its knowledge requirements, and the information artifacts that it generates for sharing and reuse.

The Roles and Responsibilities Matrix

As part of the process decomposition effort, the KM core team has already prepared an input/output/deliverables matrix. Next, the team should map clearly defined and specific roles and responsibilities for each process step. This activity is essential to ensure the commitment of each process participant to his or her role within business process delivery. For the KM core team, this information is also essential in identifying those parties who will create or provide knowledge components for use by others within the process value chain. In addition, this mapping element often documents the sequencing of knowledge component handoffs within the larger business process.

It should come as no surprise that some of those involved in process delivery are unwilling to acknowledge their actual roles and responsibilities. The mapping process helps these persons face up to their own accountability. For some, the information in the matrix may come as a real surprise. Whatever the particular circumstances within the organization, it is essential that all business process stakeholders recognize and assume responsibility for their process roles. By achieving this end, the KM core team will find it less difficult to build KM assignments into the process and to more clearly define information handoffs, reuse opportunities, and so forth. The actual execution of a roles and responsibilities matrix is illustrated in Exhibit 4. Note that the matrix itself should follow the same order as the process workflow map and should use clear, unambiguous language in the definition of assignments and outcomes. Do not be surprised if this step, which might be viewed by some as a formality, becomes a major point of contention. Stakeholders do not always want to be reminded of their responsibilities, nor do they wish to have their accountability documented as part of a KM exercise.

PROCESS RULES

Processes are governed by various sets of rules. Workflow steps, task assignments, and customer deliverables are all part of the framework that governs a business process. Similarly, each process has its decision points and associated approval workflows. If automated systems are to be employed, approval rules must be explicit so that the rule-based engines in these systems may be appropriately programmed for use within the process. In the Solution Selling example, the process approval rules include:

• Sales and business unit (BU) leadership approves targeted prospects.
• Sales leadership approves the contact person/process for initial prospect contact.
• Consultants must inform sales management when they contact an existing client or targeted prospect concerning a new opportunity.
• The assigned Solution Selling team (made up of the sales department and consultants, as the case may be) develops the initial opportunity analysis.
• The appropriate sales/delivery team deals with the prospect on the problem statement.
• Sales and finance management qualify the prospect.
• Sales, delivery, and finance management approve pricing and the proposal being sent to the prospect.
• Sales and finance management and the legal department sign off on the contract before it goes to the prospect.
• Delivery drives the kickoff process with the support of the sales department.

Rules such as these may direct the flow of information from one process stakeholder to the next and therefore influence the overall design of the process's underlying KM platform.
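Because the chapter stresses that approval rules must be explicit enough for rule-based engines to act on them, a brief sketch may help. The encoding below is hypothetical; the rule wording paraphrases the list above, and the dictionary structure and function name are invented for illustration.

```python
# Hypothetical encoding of the explicit approval rules listed above so that a
# workflow engine (or even a simple script) can check sign-offs at each step.
APPROVAL_RULES = {
    "Target prospect": {"Sales leadership", "BU leadership"},
    "Qualify client": {"Sales management", "Finance management"},
    "Set price / proposal": {"Sales management", "Delivery management", "Finance management"},
    "Prepare/present contract": {"Sales management", "Finance management", "Legal"},
}

def missing_approvals(step, approvals_received):
    """Return the approvers still required before the step may proceed."""
    required = APPROVAL_RULES.get(step, set())
    return required - set(approvals_received)

# Example: the contract cannot go to the prospect until Legal has signed off.
print(missing_approvals("Prepare/present contract",
                        {"Sales management", "Finance management"}))
# -> {'Legal'}
```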

Exhibit 4. Actual Execution of a Roles and Responsibilities Matrix

Role: Consultants, Sales, BUs
  Responsibilities: Collaborate on defining targets; BU and sales leadership to provide direction, build consensus, determine priorities
  Customer/process deliverables: List of qualified targets

Role: Consultants, Account managers
  Responsibilities: Appropriate employee/team to approach the targeted prospect and to gather information, both in the marketplace and from the prospect
  Customer/process deliverables: Approach to specific target

Role: Sales, Consultants
  Responsibilities: Designated Solution Selling team to distill information into a specific set of opportunity scenarios and options
  Customer/process deliverables: Sufficient information to craft prospect approach

Role: Sales, Delivery, Client
  Responsibilities: Designated Solution Selling team to meet with prospect to explore a specific set of opportunity scenarios and options and to agree upon a focus and priorities, level of interest
  Customer/process deliverables: Consensus with client on opportunity under consideration

Role: Sales, Finance
  Responsibilities: Due diligence on prospect to ensure that there are no bills outstanding and that the prospect is in a position to fund the envisioned engagement
  Customer/process deliverables: Determine client's ability and willingness to pay; price tolerance

Role: Sales
  Responsibilities: Due diligence to ensure that the firm follows the prospect's "buying" process, talks to the right people, and knows who the competition is
  Customer/process deliverables: Determine how decision is to be made and by whom; competition

Role: Sales, Delivery
  Responsibilities: Prepare a cogent scoping document upon which one can deliver if engaged
  Customer/process deliverables: Scope document

Role: Sales, Delivery, Client
  Responsibilities: Reach consensus and commit in terms of client's needs, firm's capacity, etc.
  Customer/process deliverables: Consensus with client on scope

Role: Sales, Delivery
  Responsibilities: Due diligence to set price based on past delivery/pricing experience
  Customer/process deliverables: Based on scope, price determined

Role: Sales, Delivery
  Responsibilities: Prepare, deliver, and present proposal; reach agreement
  Customer/process deliverables: Proposal delivered

Role: Sales, Delivery(?)
  Responsibilities: Prepare, deliver, and present contract; get signoff
  Customer/process deliverables: Contract delivered

Role: Engagement management team
  Responsibilities: Ensure there is clarity and commitment in the handoff from the sales team to the engagement team executing the contract
  Customer/process deliverables: Project initiated

Yet another consideration in process design is rules governing stakeholder access. Depending on the sensitivity of the information employed by a particular process, and therefore stored within its enabling KM system, the enterprise will exercise control over knowledge worker access rights. In a fairly open corporate culture, where particular knowledge base content is generally free of legal and operational concerns, access rights and individual user profiles may be broadly defined. However, in many instances, rigorous access rules may be required. During the mapping process, the KM core team will collect access requirements and build standard user access profiles as required. In the XYZ Consulting Firm, Solution Selling data, although highly confidential to the outside world, is readily shared within the organization. The firm therefore employs rather modest access rules:

• Sales and delivery teams will have full access to all enabling systems, including the sales force and professional services administration systems, as required.
• Sales and delivery teams will have full access to all data, templates, tools, etc. via the intranet and the professional services administration system portal.
• Modeling templates will retain individual employee confidentiality concerning salaries, etc. to the extent possible, while providing sales and delivery teams with the information they require to scope and commit to projects.

Note the clear and unambiguous implications of these rules for the eventual design of an enabling KM platform.
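Where the team does build standard user access profiles, even a policy as open as XYZ Consulting's can be written down in a form the eventual KM platform could enforce. The sketch below is a minimal, assumed representation; the profile name, resource labels, and helper function are not drawn from the source.

```python
# Illustrative role-based access profile reflecting XYZ Consulting's relatively
# open rules; profile and resource names are invented for this sketch.
ACCESS_PROFILES = {
    "sales_delivery": {
        "systems": {"sales force automation", "professional services administration"},
        "content": {"data", "templates", "tools"},
        "restrictions": {"individual salary detail masked in modeling templates"},
    },
}

def can_access(profile_name, content_type):
    """Check whether a profile grants access to a named content type."""
    profile = ACCESS_PROFILES.get(profile_name, {})
    return content_type in profile.get("content", set())

print(can_access("sales_delivery", "templates"))   # True
print(can_access("sales_delivery", "payroll"))     # False
```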

PERFORMANCE METRICS

Every key business process will employ a series of largely quantifiable performance metrics that measure both process effort (execution) and results (business objectives achieved) in a meaningful fashion. Surprisingly, the KM core team may discover that the process under examination either lacks performance metrics or employs surrogates that measure activity instead of results. As part of process mapping, standards and mechanisms of measurement will be identified and documented. Wherever possible, meaningful, results-oriented metrics should be deployed. As appropriate, actual measurement should take place through the automated system(s) that enable the business process in question. However, even automated systems may not lend themselves to meaningful measures of success, necessitating creative solutions in which the KM core team may play a supportive role.

As a case in point, XYZ Consulting's Solution Selling performance metrics are listed for the reader's consideration. Not all of these measures are "hard" and, to be honest, some reflect wishful thinking.

• Close rate:
  - Dollars closed versus dollars in pipeline
  - By project and in total: dollars proposed versus dollars committed to in the contract (i.e., contract value)
  - Trends (e.g., types of products or services sold, types of customers)
• Volume:
  - Dollars scoped per client opportunity (average-sized deal)
  - Proposed opportunities versus closed opportunities (closure rate)
  - Total dollar volume
• Cost of sale:
  - Time charged by all participants in the Solution Selling process versus total dollar volume
• Sales cycle time, spanning (1) opportunity definition, (2) scoping, (3) proposal completion, and (4) contract completion:
  - Duration of process from initiation to close
  - Duration of key process elements
• Project hurdle rate:
  - Price of proposed deliverables per hour of labor
• "Wins":
  - Success stories, qualitative information
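Several of these measures reduce to simple ratios once the enabling systems capture the underlying figures. The short calculation below uses invented numbers purely to illustrate how close rate, closure rate, and average deal size might be derived.

```python
# Invented pipeline figures; in practice these would be drawn from the CRM and
# professional services administration systems that enable the process.
dollars_in_pipeline = 4_000_000
dollars_closed = 1_100_000
proposals_sent = 25
proposals_won = 9

close_rate_dollars = dollars_closed / dollars_in_pipeline   # 0.275
closure_rate_count = proposals_won / proposals_sent         # 0.36
average_closed_deal = dollars_closed / proposals_won        # ~122,222

print(f"Close rate (dollars): {close_rate_dollars:.1%}")
print(f"Closure rate (proposals): {closure_rate_count:.1%}")
print(f"Average closed deal: ${average_closed_deal:,.0f}")
```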

As knowledge managers, the core team will help stakeholders define appropriate measures and will then build a KM platform that both captures those measures over time and provides longitudinal analysis of business process results.

PROCESS TEMPLATES AND TOOLS

During mapping, the KM core team will discover that most explicit process knowledge and best practices are captured in the templates and tools employed as part of that process. The KM core team will identify both those templates and tools already in place and those required to further "informate," streamline, and enable the process. Each template or tool should be viewed as a knowledge component. As part of the KM platform design, these components will be linked to their respective process steps for ease of reference, retrieval, and reuse. Do not be surprised if mature business processes possess numerous duplicate templates, style sheets, and the like. As part of the KM value proposition, the team will rationalize this body of work, distilling and promoting best practice and enterprisewide standards. The XYZ Consulting Firm's KM core team uncovered the following Solution Selling process templates and tools:

• Opportunity/problem assessment:
  - Product or service diagnosis tools
  - Assessment templates
  - Previous working models and sizings
  - Case histories and related deliverables
• Qualification of the client:
  - Qualification questions
  - Qualification techniques
• Buying criteria:
  - Buying questions
  - Buying techniques
• Project scoping:
  - Templates
  - Scoping questions
  - Client or prospect information
  - Pricing guidelines
  - Project/delivery methodologies
• Pricing:
  - Models; rate sheets
  - Workplans
• Proposal templates
• Contract templates
• Kickoff template(s)

In building a KM platform, the payoff will come in linking standardized templates and tools to the business process itself so that when a stakeholder arrives at a given process task, the appropriate templates, etc. are automatically presented for use or reuse. When multiple options exist, the user will need to be directed to the correct form. When no tools are in place, the KM core team will collaborate with its working client(s) to create those standards. There is nothing particularly difficult about this KM task. It merely requires a complete understanding of the business process and its stakeholders, as derived from the mapping effort itself, and a disciplined, objective sense of best practice. The greatest challenge will come in selling new practices to those wedded to old ways. By aligning performance metrics with best practices, the KM core team should be in a position to demonstrate the "value" to process stakeholders of embracing proposed changes and new standards.
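One way to realize this payoff is to key standardized templates and tools to process steps so the platform can surface them automatically. The mapping below is a hypothetical sketch; the step names echo the process list earlier in the chapter, but the structure and function are assumptions rather than the author's design.

```python
# Hypothetical index linking Solution Selling steps to their standardized
# templates and tools so the KM platform can surface them automatically.
TEMPLATES_BY_STEP = {
    "Scope potential assignment": ["scoping template", "scoping questions",
                                   "pricing guidelines"],
    "Prepare/present proposal": ["proposal template", "proposal library examples"],
    "Prepare/present contract": ["contract template"],
    "Kick off project": ["kickoff template"],
}

def templates_for(step):
    """Return the templates and tools a stakeholder should see at a given step."""
    return TEMPLATES_BY_STEP.get(step, [])

print(templates_for("Prepare/present proposal"))
```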

BUILDING THE "KNOWLEDGE LIBRARY"

Process templates do not tell the entire story. Knowledge workers will also benefit from best-in-class examples of completed templates, case studies of how to properly employ process tools, illustrations of business-winning proposals, etc. Some of these components or artifacts already exist and need only be appended to the KM platform. However, others may exist only as tacit knowledge. In this case, the enterprise will require some form of incentive that encourages in-house experts to convert their experiences into more sharable knowledge assets. The KM core team will provide support for these individual efforts as well as for the linkages between these discrete artifacts and the overall business process/KM framework.

Even in the best of all possible worlds, only a fraction of useful tacit knowledge will be transferred to more explicit forms. Few organizations have the time and resources to indulge in oral history projects. Instead, over time the KM core team will develop information technology-enabled exchanges and forums to more easily link information seekers with subject matter experts. In total, all of the process artifacts and experiences serve as a business process knowledge library to be used by less-experienced personnel. In the Solution Selling case study, library components include:

• Process templates
• Market intelligence
• Solution Selling case studies by customer type and product
• Project scoping tools and illustrations
• Proposal library
• Solution Selling process discussion forum
• A directory of Solution Selling experts (e-mail enabled)
• Internet training courses
• Related online publications

As with the other elements of the mapping effort, the knowledge library component provides considerable input for the design and development of an enterprisewide KM platform.

Notes

1. In a mature service economy, such as that of the United States, the enterprise's deliverables to the customer are more often than not a set of services rather than tangible goods.
2. The term "taxonomy" refers to a structured language employed by knowledge managers to organize, tag, and retrieve knowledge components within a KM repository. Think of KM taxonomy as the logical roadmap for organizing the enterprise's knowledge assets and as an index for finding and retrieving particular assets with the aid of some automated environment, such as an enterprise intranet site.

ACKNOWLEDGMENTS

The author acknowledges the insights and support of his colleagues as he prepared this essay, especially Tor Stenwall of New England Financial, Jerry Kanter and Jill Stoff of Babson College, and the knowledge management team from KPMG, Boston. However, the author takes full responsibility for any errors found herein.


Index

This page intentionally left blank

Index NOTE: Italicized pages refer to tables and to illustrations 128-bit encryption standard, 384 2.5 technology, 251–252 2G technology, 252 32750/5250-style transactions, 270 360 Degree Benchmarking, 29 3Com, 209 3G technology, 251–253 4G technology, 253 4GLs (fourth-generation languages), 782 5G technology, 253 802.11 protocols, 242, 252, 380, 694 802.11a standard, 380 802.11b standard, 380 802.11g standard, 380 802.1Q standard, 233 808.1d Spanning-Tree standard, 233

A ABC Bucherdienst, 627 Accenture, 848 acceptable use policies, 761–763; see also Internet activation of, 767–768 committee, 768–769 e-mail, 763–764 netiquette addendum, 766 newsgroups, 764–767 scope and overview, 763 World Wide Web resources, 764–767 access charges, 196, 202 access control, 354 against unauthorized users, 777 Web applications, 732–733 accessibility IQ of information, 681 accompany.com, 626 account administration, 744 ACM (Association for Computing Machinery), 102, 108, 594 ACM digital libraries, 677

acquisition, 75 in IT procurement process, 79 key issues in, 82 subprocesses, 82 action plans, 757–758 active vulnerability assessment tools, 396 ActiveX technology, 535 activities, 363 activity diagrams, 489, 490 ActivMedia Research, 651 actors, in use case modeling, 501, 506–507 adaptation in software agents, 441 Adaptive Software Development, 515 ADCT (Analyze, Design, Code, Test), 483 administration tools, 734 Advance Research Project Agency, 450 advisors, in information systems audits, 346 advocates, in information systems audits, 346 aerospace industry, 654 AESOPS (Agent-Based Sales Order Processing System), 451 affinity group filtering, 868 Agent-Based Sales Order Processing System (AESOPS), 451 Agent Communication Language, 435 agent technology, 115–116 Agile Manifesto for Software Development, 515 Aglets, 443 agreements outsourcing, 145–149 confidentiality, integrity and availability, 147 continuity and recovery of operations, 146 cost and billing verification, 146 network controls, 148 personnel, 148

893

IS Management Handbook program change control and testing, 147 retention of audit rights, 145–146 security administration, 147 strategic planning, 148–149 vendor controls, 147 vendor stability, 148 agreements for outsourcing, 145–149 agreements, 145–149 confidentiality, integrity and availability, 147 continuity and recovery of operations, 146 vendor stability, 148 confidentiality, integrity and availability, 147 continuity and recovery of operations, 146 cost and billing verification, 146 network controls, 148 personnel, 148 program change control and testing, 147 retention of audit rights, 145–146 security administration, 147 strategic planning, 148–149 vendor controls, 147 vendor stability, 148 AICPA (American Institute of Certified Public Accountants), 595 Air Products and Chemicals Inc., 821–822, 851–852 AITP (Association of Information Technology Professionals), 779 Alando, 627 ALI (American Law Institute), 596 alignment categories levels of, 9–14, 15–16 rating system for, 16–19 alternative meeting methods, 828–831 conference meetings, 828–829 e-mail, 830 electronic meeting systems, 831 real-time data conferencing, 831 videoconferencing, 829–830 Web meetings, 830–831 Amazon.com, 625 acquisition of ABC Bucherdienst, 627 cyberattacks against, 719 data collection practices, 718 service failures, 331 ambiguity in leadership, 120 America Online, 207 American Institute of Certified Public Accountants (AICPA), 595

894

American Law Institute (ALI), 596 American Society for Industrial Security, 147 Ameritech, 202, 207 AMP, 451 AMPS analog system, 381 Andersen Consulting, 848 anecdotal accounts, 93 animation tools, 864 anomaly-based IDSs (intrusion detection systems), 393–394 anthropomorphism, 440 anti-lock brakes, 512 anti-trust regulations, 198, 655 antibodies, 112 ANX (Automotive Network Exchange), 114 AOL TimeWarner, 207 Apple Computer Corp., 475 application-based IDSs (intrusion detection systems), 392–393 Application Center 2000, 694 application logic layer, 218, 220 application service providers, see ASPs applications, 782 consistency of, 477 developers of, 735 leads, 63 publishing of, 666 users of, 729 architecture design, 129, 182–183 architectures, 182–183 Ariba, 639, 641 Armey, Richard, 708 Armour, F., 507 Art of Possibility (book), 120 Art of War, The (book), 120 artificial intelligence, 436, 443 ASP.NET, 418 ASPs (application service providers), 159 future of, 166 information technology (IT) management issues, 162–166 role in 21st century organizations, 159–162 strengths in virtual corporations, 161 assessment of organizations, 16–19 steps in, 18–19 tally sheet for, 17–18 asset management, 75 in IT procurement process, 79 key issues in, 85–86 subprocesses, 85 assets, in BASEline analysis, 42 association discovery, 293, 311–312

Index Association for Computing Machinery (ACM), 102, 108, 594 association, in use case modeling, 501 Association of Information Technology Professionals (AITP), 779 AT&T Corp. acquisition of cable companies, 205 anti-trust suits, 198 average revenue per minute of switched services, 201 breakup of, 198–199 investment of Microsoft in, 211 market share, 199, 200 voluntary divestiture of, 192, 207 ATM networks, 258 ATM QoS, 262 Atomic Energy of Canada Ltd., 590 attack analysis and response, 358 auctions, 625, 641 audio archives, 676 audit and trends analysis, 358 audit rights, in outsourcing agreements, 145–146 audits, 342–343 categories of, 343 in data conversion, 330 of outsourcing vendors, 150 roles of auditors in, 344–346 roles of executives in, 346–348 scenarios in, 341–342 traditional vs. value added approach, 343–344 authentication, 732–733 Autobytel.com, 625 Automation Engine, 677–678 automotive industry, 654 Automotive Network Exchange (ANX), 114 autonomy in software agents, 438 AutoTester Inc., 336 AutoTradeCenter.com, 653 availability, as metrics of service levels, 164 Avendra, 654

B B2B (business-to-business) commerce, 315; see also E-business; E-commerce; purchasing systems business relationships in, 652 common XML terms in, 431 competition in, 625–626 in E-procurement, 638–639 E-procurement software, 639 exchange options, 658

interoperability in, 659 and maturity in industry, 659 options for, 657–659 potential growth of, 651–652 success factors in, 631–634 Web data in, 676 B2C (business-to-consumer) commerce, 625; see also E-business; E-commerce success factors in, 629–631 Web data in, 676 Baan, 38, 483 back end, 464 backup procedures, 779 BAE Systems, 654 balanced scorecard concept, 370–372 bandwidth cost of, 250–251 in telework arrangements, 811 utilization analysis, 224 VPNs (virtual private networks), 265–266 bankruptcies, 192 and data privacy, 697 Barker, Joel, 436 BASEline analysis, 41–42 Bayesian inferencing, 867 BBBOnLine Privacy Seal program, 703, 709 Bea Systems Inc., 696 behavioral diagrams, 487–492 Bell, Alexander Graham, 198 Bell Atlantic, 202, 207 Bell Operating Companies, 198 BellSouth, 202 benchmarking, 29, 363 benefit analysis, 619 Berners-Lee, Tim, 691 best-guess review, 785, 788–789 Better Business Bureau Online (BBBonline), 703, 709 BIA (business impact analysis), 364, 367–368 billing, 146, 631 bimodal scales, 93 biometrics, 732 Birnbaum, J., 214 bit arbitrage, 194 bits, 194 BizTalk Server 2000, 694 Bloomberg L.P., 723 Blueprints for High Availability:Designing Resilient Distributed Systems (book), 374 Bluetooth, 242, 253, 381, 694 Boeing, 654 Booch, G., 492 bots, 116

895

IS Management Handbook bounded rationality, 439 BPDUs (bridge protocol data units), 233 BPI (business process improvement), 363 BPR (business process reengineering), 23 brainstorming, 42 breakout stage (strategic breakout method), 616–617 Bregman, Mark, 214 brick-and-mortar companies, 627 bridge protocol data units (BPDUs), 233 Brinkkemper, S., 494 broadband Internet, 195 brochureware, 605 browsers, 437, 536 BRP (business operations resumption planning), 365 budgets, 139–140 Building the Data Warehouse (book), 281 bureaucracy-bashing, 31 business actors, 507 business analysts, 63 business domain components, 532 business impact analysis (BIA), 364, 367–368 business operations resumption planning (BRP), 365 business processes; see also mapping business processes architects, 63 improvement (BPI), 363 and information technology (IT) capabilities, 24 interdependencies as, 364–365 reengineering (BPR), 23 SLM (service level management) in, 339 business restructurings, 127 business strategies assessment of, 614 in BASEline analysis, 42 and C-SLC (customer-supplier life cycle), 606–608 in E-procurement, 644 in information technology (IT) infrastructures, 181–182 levels of, 604–606 SWOT analysis, 615 business technologists, 133 business-to-business commerce, see B2B (business-to-business) commerce business-to-consumer commerce, see B2C (business-to-consumer) commerce buy.com, 626 buyer software, 639

896

C C-band auction, 210 C programming language, 460 C# programming language, 410–413, 418, 460, 694 C++ programming language, 694 C-SLC (customer-supplier life cycle), 606–608 CA (continuous availability), 365–366, 373–374 Cable & Wireless, 211 cable channels, 115 cable Internet, 195 cable modems, 811 cable television, 205–206 Cable TV Privacy Act (1984), 703 cabling contractors, 224 CAETI (Computer Assisted Education and Training Initiative), 450 calendaring, 113 call centers, 105 Cao, Q., 494 Capability Maturity Model, 7 CAPBAK, 336 capture/playback tools, 336 cardinality of data, 323 Carnegie Mellon University, 7, 448 Category 5 data cables, 224 caveat emptor, 592 caveat venditor, 592 CDMA, 251–252, 381 CDMA2000, 251–252, 381 CDPD, 381 cellular communications, 204–206, 381 cellular phones, 116, 382 census data, 309 centralization, 177–178 centralized authentication, 733 centralized computing, 112 centralized development and support environment, 791 centralized Internet security architecture, 739–740 Certicom, 384 Certificate Authority, 259 certification, 579 change agents, in information systems audits, 345 change management, 130 change readiness assessment, 618–621 chaos, punctuated, 112 character data, 310 Charles Schwab, 331, 625 chassis-based switches, 228

Index ChemB2B.com, 654 ChemConnect, 658 Chemdex, 654 chemical industry, 654 ChemicalDesk, 654 Chevron Corp., 722 chief knowledge officer, 876 Chief Policy Officer (CPO), 108 Children's Online Privacy Act (1998), 697, 703 Christensen, Clayton, 111 Chrysler Corp, 642 Chun Yuan University, 450 Cingular Wireless, 252 CIOs (chief information officers) leadership style, 119–123 and review processes, 29 role in era of dislocation technologies, 111, 117 role in IT planning, 39 as role models, 123–124 Cisco Systems, 209, 625, 656, 693 Citicorp, 175 City University of Hong Kong, 449 CLASS (custom local exchanges services), 202 class diagrams, 485–486 classification, 311 classification models, 293, 310 cleansing of data, 310 CLECs (competitive local exchange carriers), 203 click-and-mortar business, 625; see also E-business opportunities in, 627 vs. retailers, 608 clicks, 625 clickthrough rates, 629 clickwrap license, 596 client capture, 336 client/customer/patient choice, 106 client/server systems, 37–38, 333, 666 clustering, 310, 312–313 CMP (crisis management planning), 365 CNIL (Commission Nationale de l'Informatique et des Libertés), 698 CNT, 272 COBIT (Control Objectives for Business Information Technology), 779 COBOL programming language, 458, 460 Coca-Cola, 446, 654 Cockburn, A., 507 code instrumentation, 335–336

code speed, 587–588 code walkthroughs, 577 coding, 460 COINS (community-of-interest-networks), 114 collaboration diagrams, 489 services, 113 technologies, 820, 864, 868 collective code ownership, 523–524 collectivism, 823 .com, 675 Comcast, 207 Commerce One, 639 Commerce Server 2000, 694 CommerceNet, 446 Commission Nationale de l'Informatique et des Libertés (CNIL), 698 commitment, 32 commitment management process, 58–65 commitment documents, 59 defining problems or opportunities in, 59–61 defining project roles in, 62–63 executive sponsor, 58–59 post-implementation assessment, 64 risk management in, 61–62 value template in, 60 Common Language Runtime, 415 Common Object Request Broker Architecture (CORBA), 496, 535 Common Warehouse Metamodel, 496 communication contextuality, 823 Communications Decency Act of 1996, 722 communications maturity, 8 levels of, 9 rating system for, 17 communities, 838–839 community-of-interest-networks (COINS), 114 company analysis, 614 Compaq Computer Corp., 722–723 comparative analysis, 363 compassion, 106 competency/value measurements maturity, 8 levels of, 10 rating system for, 17 competition, 106 model of, 626 time-based, 26 competitive local exchange carriers (CLECs), 203

897

IS Management Handbook competitors, 717–718 compilers, 458 complex projects, 561–563 added value in, 563–565 coordination dangers in, 568–569 factors in, 567 project development tolerance, 565–567 steps in, 569–571 complex structures, 130–131 compliance team, 709 component diagrams, 486 components, 532 architecture, 534 awareness, 536 based on business requirements, 533 building, 538–540 building applications with, 536–538 cost-effective, 540–543 enabling technologies, 533–534 framework, 533–536 off-the-shelf, 532 overview, 531 reusability, 537, 539–540 Computer Assisted Education and Training Initiative (CAETI), 450 Computer Associates, 336 Computer Professionals for Social Responsibility (CPSR), 709 Computer Security Institute (CSI), 800 Computerworld.com, 514 conference meetings, 828–829 confidential documents, 802–803 confidentiality, 106, 147 configuration analysis and response, 358 consortia exchanges, 654–655 constraints, 492 consumers, concerns on Internet privacy, 718 consumers' report, 141 content knowledge, 875 Content Management Server 2001, 694 content management systems, 864 content mining, 292 contextual IQ of information, 681 continuity planning, see CP continuous availability (CA), 365–365, 373–374 continuous requirement support, 358 contract employees, 90 contract fulfillment, 75 key issues in, 83 subprocesses, 82–83 contractors, 144

898

contracts, 596 control environments audits, 343 Control Objectives for Business Information Technology (COBIT), 779 control totals in data conversion, 329 Convention 108, 698 cookie sharing, 720 cookies, 668, 718 cooperation in software agents, 438 coordination, 132, 568–569 Copyright Act of 1976, 775 copyright infringement, 721, 775–776 copyrights, 679, 718 CORBA (Common Object Request Broker Architecture), 496, 535 corporate espionage, 718 corporate intranets, see intranets corporate strategies, see business strategies correction of data, 325–326 cost analysis, 619 costs, 26 analysis of, 142 direct, 142 indirect, 142 and outsourcing, 136 overhead, 142 verification of, 146 counts, 93 Covad, 192 Covey, Stephen, 120 Covisint, 642, 654 CP (continuity planning), 361 balanced scorecard concept in, 370–372 components of, 366–369 department interdependencies in, 364–365 disaster recovery in, 362–364 framework, 366 process approach to, 365–366 process improvement, 362–365, 367 SLAs (service level agreements) in, 375 value journey, 369–370 for Web-based applications, 372–376 CPO (Chief Policy Officer), 108 CPO (Chief Privacy Officer), 708–709 expanded role of, 724–725 primary responsibility of, 715–716 CPRS (Computer Professionals for Social Responsibility), 709 CPU time, 328 crackers, 718–719 creativity in leadership, 122 credit cards, 312

Index CREs (customer relationship executives), 51 in managing project commitments, 57–58 in service level agreements (SLAs), 53–54 crisis management planning (CMP), 365 critical success factor analysis, 42 CRM (customer relationship management), 125 ASPs (application service providers), 159 business process mapping in, 880–883 data warehousing in, 290 cross-functional process integration, 126 Crystal, 515 CSI (Computer Security Institute), 800 cultural differences and leadership, 121 and outsourcing, 158 and temporal coordination of work, 824 in virtual teams, 823–824 Cunningham, Jon, 441 currency conversions, 310 custom-built solutions, 127 custom local exchanges services (CLASS), 202 custom software, 573 customer-supplier life cycle (C-SLC), 606–608 customers; see also suppliers in customer-supplier life cycle, 606–608 focus, 137 profiling, 313 relationship management, see CRM satisfaction of, 59, 96, 754–756 services, 26, 63 teams, 526 tests, 521 cybercrime, 718–719 cyberspace, 349–350; see also Internet dynamic prevention and protection in, 357–360 human threats in, 350–351 security issues, 351 cycle times, 78, 114

D D'Agents, 443 DaimlerChrysler, 654 Daley, William, 703 data, 836 abnormalities, 321–325 collection methods, 93–94, 98 correction, 325–326 encryption, 354 jacks, 224 management, 63, 177

quality, 321–325 data access layer, 218, 220 data centers and information technology (IT) infrastructures, 177 measures for, 29 physical security of, 147 data communications, 811 data conversion, 315 cost and benefits of, 316 data quality, 321–325 data warehousing, 329 design of, 327–329 error correction process, 325–326 mapping in, 326–327 recovery from error in, 329–330 steps in, 316–321 data conversion team (DCT), 315 data entry, 327 data exchange, 429 data flow diagram (DFD), 459 data marts, 301–305 data mining, 307 applications of, 313–314 data warehousing in, 291–292 need for, 308–309 vs. other analysis methods, 308 process of, 309–311 tasks, 292 techniques, 293, 311–313 tools, 294–295 Web-based, 292–293 in Web integration, 674 Data + Model Decision Support System, 449 data privacy costs of, 708 cultural-political perceptions on, 697 Data Protection Directive (European Commission) requirements, 706–708 data protection principles, 699–702 in the European community, 698–699 issues in, 706–708 legislations on, 697 Safe Harbor principles, 704–706 in the United States, 702–704 Data Protection Act (United Kingdom), 700–702 Data Protection Directive (European Union), 698–700 compliance in the United States, 703–704 issues in, 706–708 recommendations for U.S. companies, 708–712

899

IS Management Handbook data protection principles, 699–700 data security administration, 745 data storage layer, 219, 220 data subjects, 699 data visualization, 295 data warehousing, 671–674 basics of, 281–283 corporate spending on, 279 costs of, 677 in data conversion, 329 data marts, 301–305 in data mining, 291–292 and decision support systems, 287–291 design and construction of, 284–287 history of, 279–281 intelligent technology in, 437 managerial and organizational impacts of, 296–299 object-oriented, 679 ROI (return on investment), 672 software agents in, 444 structured data in, 859 Web integration, 674–676 data webhouses, see Web-based data warehousing database management systems (DBMS), 280 databases, 309 DataTAC/Ardis, 252 Davenport, T., 836 DB2 Universal Database, 304 DBMS (database management systems), 280 DCOM (Distributed Component Object Model), 535 DCT (data conversion team), 315 debuggers, 458 decentralization, 177–178 decentralized authentication, 733 decentralized Internet security architecture, 739–740 decision support systems, 287–291 multidimensional OLAP (online analytical processing), 289 relational OLAP (online analytical processing), 290 virtual data warehouse, 287–289 Web-based data warehouses, 290, 290–291, 685–687 decision trees, 294, 295 Declaration of the Rights of Man and of the Citizen, The, 698 defamation, 722, 766 delivery management, 874 Dell Computer, 656 demographic databases, 309

900

denial-of-service (DoS) attacks, 234 Department of Commerce, 707–708 Department of Justice (DOJ), 198 deployment processes, 74–75 acquisition process, 82 contract fulfillment process, 82–83 diagrams, 486–487 key issues in, 81 subprocesses, 81 DePree, Max, 120 DES (Digital Encryption Standard), 259 Descarte's rule of change, 105 design reviews, 576–577 desktop computing, 112 desktop workstations, 163, 225 Deutsche Telekom, 209 DevelopMentor Inc., 693 DFD (data flow diagram), 459 DHL, 677–678 diagnose stage (strategic breakout method), 612–616 dial-up connection, 195 Differentiated Services (Diff-Serv), 263 digital bits, 194 digital cameras, 381 digital certificates, 732 Digital Encryption Standard (DES), 259 digital hashing, 386 digital signatures, 386 digital subscriber lines (DSL), 195, 205–206 digitization, 194 direct costs, 142 direct risk mitigation, 354–356 directories, 663–664 directory services, 734 disaster recovery, 362–364 disaster recovery planning (DRP), 365 disclosure, 106 DISCO (Discovery of Web Services), 693 Discovery of Web Services (DISCO), 693 Discovery (Space Shuttle), 594 discretionary costs, 49 discrimination, 722–723 dislocating technologies, 111–112 agent technology, 115–116 evolutionary nature of, 112 Internet and Internet technologies, 113–115 network computing, 112–113 pervasive computing, 115–116 Disney Corp., 697 distance education, 450 distributed computing, 112 components, 533–535

Index Web applications in, 665–667 distributed environment, 790–791 distribution management, 874 Diversified Graphics Ltd., 595 divestitures, 127 Dobing, B., 493 document management, 113 document management systems, 864 Document Object Model (DOM), 430–431 Document Type Definition (DTD), 430 documentation, 775, 778 Dollar Rent-a-Car, 695 DOM (Document Object Model), 430–431 domain analysis, 540 domain integrity of data, 322–323 domains, 675 DoS (denial-of-service) attacks, 234 dot.coms, 33; see also E-business; E-commerce competition, 625 start-ups and failures, 626 types of, 625–626 DoubleClick.com, 625 Dow Chemical, 658 Dow Corning, 175 downsizing, 799 confidential documents in, 802–803 e-mail in, 801–802 security awareness in, 802–803 security breaches in, 799–800 downtime, 331 Driver's Privacy Protection Act (1994), 703 DRP (disaster recovery planning), 365 Drucker, Peter, 111, 676 DSDM, 515 DSL (digital subscriber lines), 195, 205–206 DTD (Document Type Definition), 430 dumb terminals, 38 Dun & Bradstreet, 704 Dutch auction rules, 641 dynamic IP address, 225

E E-business, 214–215; see also strategic breakout method business-to-business, see B2B (businessto-business) commerce business-to-consumer, see B2C (businessto-consumer) commerce C-SLC (customer-supplier life cycle), 606–608 click-and-mortar opportunities in, 627

company analysis, 614 competitive landscape in, 625–628 data warehousing in, 290 evolution of, 629 industry analysis, 613 levels of, 604 overview, 603 players in, 625 processes, 24 SLM (service level management) in, 331–332 strategic alliances in, 627–628 strategies, 604–606 in supply chain management, 637 SWOT analysis, 615 transaction cost savings in, 637 E-business XML (ebXML), 431–432 E-catalogs, 640–641 E-Chemicals.com, 654 E-commerce, 445–446; see also E-business definition of, 603 in foreign markets, 627 E-hubs, 651, 656 E-mail, 113 acceptable Internet use policies, 763–764 as alternative to face-to-face meetings, 830 in B2C (business-to-consumer) commerce, 630 in downsizing, 801 and employee productivity, 724 filtering, 443 as a high-tech security measure, 801–802 management packages, 630 in outsourcing, 157 routing, 443 software agents in, 443–444 spam, 718 in virtual teams, 820 E-paternalism, 707 E-procurement, 637–638; see also electronic marketplaces; purchasing systems business implementation issues in, 645–649 federal government process in, 648 life cycle, 646–647 for MRO (maintenance, repair and operating supplies) goods, 647–649 success factors in, 644–645 technical issues in, 649–650 E/R (entity/relationship) diagrams, 459

901

IS Management Handbook E-services, 220 e-steel.com, 653 EAP (Extensible Authentication Protocol), 234 eBay.com, 331, 625, 719 ebXML (E-business XML), 431–432 EDI (electronic data interchange), 315; see also E-business in E-business, 605 human-machine interaction in, 692 in private exchanges, 656 XML-based, 429 .edu, 675 EEA (European Economic Area), 701 effectiveness metrics, 78 efficiency metric, 78 EFT (electronic fund transfer), 605 egg groups, 31 eight-second rule, 629 EIP (enterprise information portal) software, 863, 864 Electric Power Research Institute, 446 electronic commerce, see E-commerce Electronic Communications Privacy Act (1986), 703 electronic data interchange, see EDI electronic fund transfer (EFT), 605 Electronic Funds Transfer Act, 703 electronic marketplaces, 641–642; see also E-procurement business-to-business relationships in, 652 factors affecting survival of, 631–634 types of, 652–657 electronic meeting systems, 831 electronic publishing, 428 Elemica, 658 embedded sensors, 214 emerging technologies, audits of, 343 employease.com, 653 employees directories, 663–664 invasion of privacy of, 723 loss of productivity, 724 performance of, 809 in software industry, 591 turnover rates, 90 employer liability, 721 empowerment, 30–31 and leadership, 122 enablers, 363 encryption, 259 end-user computing, 37, 754 end-user support, 753, 786–787 Endeavor (Space Shuttle), 594

902

engineering management, 558 Enterprise Distributed Computing standard, 496 enterprise information portal (EIP) software, 864 Enterprise Integration Technologies, 446 Enterprise JavaBeans, 494, 496, 536 enterprise network, 223 budget for, 227–228 infrastructure analysis, 223–226 planning, 226 RFP (Request for Proposal), 235–238 security considerations, 233–235 vendor management, 228–232 vendor selection, 238–240 entity/relationship (E/R) diagrams, 459 entrepreneurs, 111 environments, in BASEline analysis, 42 Epson America Inc., 721 equilibrium, punctuated, 112 equity, 106 Ernst and Whinney, 595 Ernst and Young, 724, 848 ERP (enterprise resource planning) systems, 38 and ASPs (application service providers), 159 in B2B (business-to-business) commerce, 633 in information technology (IT) organizations, 125 software agents in, 451 in supply chain integration, 184 error detection and testing, 593 error handling, 329 error reporters, 458 escalation procedures, 55 ESCON directors, 272 estimation, 313 Ethernet, 275 ethics in information technology, 101–102 implications for responsibility, 594–595 and leadership, 121–122 principles, 105–106 principles of, 104 ethics program, 102–103 adopting code of ethics, 108 establishing committee for, 107–108 establishing sub-committee on ethics, 109 ethical analysis in, 108–109 implementing, 104 organizing, 103–104 reporting mechanism, 108

Index review process, 109 visibility of, 108 ETL (extract, transform, load) process, 315 E*Trade.com, 591, 625 Etzioni, Oren, 445 European Data Privacy Commissioners, 704 European Economic Area (EEA), 701 European Union, data protection laws in, 699 evidentiary guidance, 105 Exchange Server 2000, 694 executive sponsor, 63, 876 Exostar, 654 Expedia.com, 695 expert systems, 436 experts, in information systems audits, 345 explicit knowledge, 873 construct, 504, 506 Extensible Authentication Protocol (EAP), 234 Extensible Hypertext Markup Language (XHTML), 426, 430 Extensible Markup Language (XML), 163–164, 406–407 Extensible Stylesheet Language (XSL), 411, 428, 430 external audit, 150 external threats, in cyberspace, 350–351 extranets, 437 eXtreme Modeling, 496 Extreme Programming, see XP (Extreme Programming) Extreme Programming, Embrace Change (book), 514

F face-to-face management, 809–810; see also meeting methods, alternative; telework default meeting methods, 832 objectives of, 827–828 optimizing, 831 Fair Credit Reporting Act (1970), 703 Family Education Rights and Privacy Act (1974), 703 FAST (fast analysis solution technique), 363 fat clients, 666 FBI (Federal Bureau of Investigation), 719 FCC (Federal Communications Commission), 197, 210 Feature Driven Development, 515 Federal Bureau of Investigation (FBI), 719

Federal Communications Commission (FCC), 197, 210 Federal Express, 182 Federal Register, 704 Federal Trade Commission (FTC), 697–698 federalist organizational model, 180 feedback, 66 Fibre Channel technology, 272–275 financial audits, 343 Financial Information Exchange Markup Language (FIXML), 431–432 financial management, 875 Financial Products Markup Language (FpML), 431 Firefly (Web agent), 442 FirstSense Software, 336 fixed configuration switches, 228 fixed costs, 136 FIXML (Financial Information Exchange Markup Language), 431–432 flowcharters, 458 focal points, 773 Forbes Inc., 719 Ford Motor Co., 642, 654 formal audit, 785, 788 Fortran programming language, 458, 460 fourth-generation languages (4GLs), 782 FpML (Financial Products Markup Language), 431 frame relay networks, 258 frame-tagging, 233 France, data protection laws in, 698 fraud detection, 313 Freemarkets.com, 625, 632, 642 Frito-Lay, 176 front end, 464 Frontier, 199, 210 FTC (Federal Trade Commission), 697–698 fulfillment services, 626, 631 full disclosure, 106 function assignment, 469–470 functionality of Web sites, 629–630 functionality tests, 746 fundamental analysis agent, 448 Future Edge (book), 436 future station solution, 363

G Galactic Software, 721 Gantt chart, 66 Gartner Group, 773 Gates, Bill, 720

903

IS Management Handbook general control audits, 343 General Motors Corp., 451, 642, 654 general packet radio services (GPRS), 381 generality in methodologies, 460–462 Georgia State University, 449 geostationary satellites, 381 Giga Information Group, 514 gigabytes, 303 global Internet, 115 global positioning systems, 214 globalization of processes, 126 GNS (growth need strength), 32 Gold Rush, 625 Golden rule, 105 Goldman Sachs & Co., 651 GOPHER, 209 Gould, Stephen Jay, 111 governance maturity, 8, 15 rating system for, 17 GPRS, 251–252 GPRS/EDGE, 251–252 GPRS (general packet radio services), 381 Gramm-Leach-Bliley Financial Modernization Act (1999), 703, 715 graphic design tools, 864 graphical user interfaces (GUIs), 440 graphics packages, 782 Grove, Andy, 115 growth need strength (GNS), 32 GSM, 381 GTE, 207 GUIs (graphical user interfaces), 440

H Habermas, Jurgen, 836 hacking, 718–719 cases, 800–801 signatures, 234–235 handheld devices, 214 harassment, 722–723 HDTV (high-definition television), 210 Health Informatics Research Group, 451 Health Insurance Portability and Accountability Act, 716 Health Level 7 (HL7) Committee, 432 help agents, 438 help desk, 65 help-desk technology, 333 Heron Labs, 450 Hewlett-Packard, 336, 704 Hewlett-Packard Medical Group, 593 hi-fidelity systems, 466 high-definition television (HDTV), 210

904

HOL (high-order language), 458 holes, 776 Home Depot, 627 honesty, 106 honey pots, 395 Hong Kong, E-commerce initiatives in, 627 horizontal marketplaces, 653 Hospital Privacy Protection Act, 703 host-based IDSs (intrusion detection systems), 392, 398 Host Integration Server 2000, 694 hostile work environment, 722–723 hotel industry, 654 HTML editors, 864 HTML files, 679 HTML (Hypertext Markup Language), 426 hubs, 224 human resources, 89 development of, 130 knowledge management in, 875 measures for, 29, 90, 92–94 performance metrics, 89–92 Web self-services in, 664 Hyatt Hotels, 654 Hypertext Markup Language (HTML), 426

I IBM Corp., 591, 694, 696 IBM DB2 Universal Database, 304 IDEA encryption, 259 identification and authentication (I & A), 354 IDSs (intrusion detection systems), 389 automatic response to attacks, 394 basis for acquiring, 389–390 complementary tools, 395–396 deployment of, 397–398 event analysis approaches, 393–394 against hacking, 234–235 limitations of, 396–397 types of, 390–393 IFAC (International Federation of Accountants), 779 ILECs (incumbent local exchange carriers), 202–203 ILOVEYOU virus, 719 iMode, 380–381 impartiality, 106 construct, 504, 506 incompatible systems, 773–774 inconsistency of data, 323 incumbent local exchange carriers (ILECs), 202–203 India, outsourcing to, 154–155

Index indirect costs, 142 individualism, 823 industry analysis, 613 industry associations, 114 information, 363 quality of, 681–682 information age organizations, 858 information appliances, 214 information brokers, 625 information highway, 115, 440 information infrastructure, components of, 163–164 information retrieval, 445, 859 information services, 47–48 commitment management process, 58–65 expenditures, 49 internal economy for, 48–50 investing in, 48–50 managing project commitments in, 56–58 managing service delivery in, 50–52 metrics and reporting tools, 65–68 role of project office in, 68–69 service level agreements (SLAs), 50, 52–56 information systems, 214; see also audits; outsourcing pervasive, 216–221 Information Systems Audit and Control Association (ISACA), 779 information technology (IT) capabilities, 21–24 as enabler of process orientation, 125 ethical issues in, 101–102 infrastructures, see infrastructure leaders, 26 management, 38–39 information technology (IT)-business alignment, 7 alignment categories, 7–15 assessing organizations in, 16–19 BASEline analysis of, 41–42 communications maturity, 9 competency/value measurements maturity, 10 partnership maturity, 11, 14 skills maturity, 13 technology scope maturity, 12 information technology (IT) investments, 48–50 business processes as targets of, 23 focus of, 24 vs. information technology (IT) capabilities, 21–23 measures for, 29

option value of, 24 portfolio, 42 information technology (IT) organizations, 25; see also process-based IT organizations alignment and focus in, 27 behaviors, 27–29 characteristics of, 33–34 commitment and productivity, 32 drivers of change, 26 empowerment of people in, 30–31 human resources, 89–91 meaningful work in, 32 review processes, 29 strategic objectives, 27–29 success factors in workforce transformation, 33 trustworthiness of people, 32 turbulence in, 25–26 values, 27–29 vision, 27–29 work climate and culture in, 31–32 information technology (IT) planning, 37 composition of team in, 44 dialectical approach to, 39–40 for line management, 41 planning cycle in, 45 planning horizon in, 44 procedures, 43–45 process and deliverables in, 44 role of line managers in, 40–43 information technology (IT) procurement, 73; see also E-procurement deployment processes, 74–75 key issues in, 76–77 management agenda in, 77–79 managing service delivery in, 75–76 information technology (IT) professionals, 25 commitment and productivity of, 32 ethics, 102 market for, 33 meaningful work, 32 trustworthiness of, 32 Infosecurity News, 772 Infosys Technologies Ltd., 451 infrared communications, 381 infrastructure lead, 63 infrastructure services, 128, 183–184 infrastructures, 176–177 in B2B (business-to-business) commerce, 633–634 in B2C (business-to-consumer) commerce, 631

905

IS Management Handbook and changing organizational structures, 177–180 corporate investments in, 175–176 design and implementation of, 181–185 implementing and sustaining, 185–187 in Internet security architecture, 734–736, 744 pyramid of, 181 rebuilding, 187–188 role in traditional firms, 177 inhibitors, 98 initial public offerings (IPOs), 626 initiate stage (strategic breakout method), 610–611 Inmon, Bill, 281, 671 innovation in leadership, 122 Innovators Dilemma, The (book), 111 insourced solutions, 127 Institute for Electronics and Electrical Engineers, 102, 108 insurance companies, 313 Integrated Services Digital Network (ISDN), 811 Intel, 209 intellectual assets, 128, 718 intelligence in software agents, 438 intelligent highway systems, 116 intelligent keys, 324 intelligent pulling, 677 interdependencies, 364–365 interface agents, 441 internal audit, 150 internal combustion engine, 112 internal consultants, in information systems audits, 344–345 internal service organizations, 137–138 internal threats, in cyberspace, 350 International Association of Corporate Privacy Officers, 715 International Federation of Accountants (IFAC), 779 International Network Services, 335 Internet, 38; see also acceptable use policies access control, 732–733 data, see Web data and data warehousing, 673, 676 as a dislocating technology, 111, 113–115 growth of, 192, 208, 715 mobile, see mobile Internet number of users, 715 privacy standards, 715–716 risks of use in business loss of employee productivity, 724 obscenity, 722–723

906

pornography, 722–723 technical failure, 721 violations of securities laws, 723 violations of trademark and trade secret laws, 723 software agents in, 437, 444 speed and user service, 333 in U.S. households, 195 wireless vs. fixed-line access, 242–244 Internet exchanges, 651 Internet Explorer, 437 Internet Protocol (IP), 195, 251 Internet Relay Chat (IRC), 766 Internet research, acceptable use policies for, 764–767 Internet risks, 717–724 business partners, 720 competitors, 717–718 consumers, 718 copyright infringement, 721 criminals, 718–719 defamation, 722 discrimination, 722–723 government, 719–720 harassment, 722–723 hostile work environment, 722–723 intruders, 720 invasion of employee privacy, 723 libel, 722 Internet Security and Acceleration Server 2000, 694 Internet security architecture, 731; see also security authentication, 732–733 design, 742–745 product strategy and selection, 739–742 project planning and initiation, 734–736 requirements, 736–739 testing, 745–746 Internet service providers (ISPs), 196 Internet Softbot, 445 Internet telephony, 194–196, 209–210 Internet use policy, 725–726 interoperability in data conversion, 328–329 Intershop Research, 445 interstate highway system, 112 intranets, 38 agent technology in, 447 in KMS (knowledge management systems), 863, 864 reasons for creating, 115 in research and development, 605 roles of, 663–665 intrinsic IQ of information, 681

Index intrusion detection systems, see IDSs (intrusion detection systems) inventory, 785, 788 inventory management systems, 649, 664 invoices, 146 IP addresses, 225, 718 IP (Internet Protocol), 195, 251 IP networks, 258 IP QoS, 263 iPAQs, 382 IPOs (initial public offerings), 626 IRC (Internet Relay Chat), 766 Ireland, E-commerce initiatives in, 627 ISACA (Information Systems Audit and Control Association), 779 ISDN (Integrated Services Digital Network), 811 ISO stack, 382 ISPs (Internet service providers), 195 IT Procurement Process Framework, deployment processes, 74–76 iteration planning, 520–521

J J2EE (Java Platform Enterprise Edition), 415–416 as a component framework product, 536 vs. .NET framework, 417–422 Jacobson, I., 492, 499 JAD (Joint Application Development), 483, 550–551 Japan, 823 Java, 116 Java applets, 220, 443 Java chip, 116 Java programming language, 417–419 Java virtual machine (JVM), 415, 418, 536 JC Penney, 182 J.D. Edwards, 483 jelly beans, 115 Jini, 116 Johnson & Johnson, 175, 178–179 Joint Application Development (JAD), 483, 550–551 joysticks, 312

K Kana (e-mail management system), 631 KANBAN (inventory management system), 649 Kant, Immanuel, 836 Kant's categorical imperative, 105

Kelly, Kevin, 115 killer applications, 208 Kinney, Jim, 121, 849–850 Klosek, Jacqueline, 700 Kmart, 627 KMS (knowledge management systems), 858 repository, 860–863 affinity group filtering, 868 collaboration and messaging, 868 gateways, 869 knowledge directories, 868 knowledge mapping, 866–867 management tools, 869–870 multimedia search and retrieval, 866 personalization, 867 standing queries, 867–868 text search and retrieval, 865–866 user interface design, 864–865 technologies, 863 knowledge, 835–837 directories, 868 library, 877 mapping, 866–867 workers, 160, 876 knowledge-based systems, 436 knowledge management, 837–838; see also mapping business processes benefits of, 874 best practices, 849–853 business case for, 846–849 in business processes, 874–875 in changing institutional behaviors, 839–840 conceptual vs. real, 840–841 context for, 846 corporate practices in, 843–844, 849–853 definition of, 844–845 as framework for communities, 838–839 getting started with, 853–854 by industry sector, 848 information components of, 873–874, 875 vs. information retrieval, 859 models, 860–863 players in, 876 technologies, 864 known risks, 546, 547 Kobryn, C., 492 Koehler, W., 678 Korea, 823 Kraft Foods North America, 849–850

L laissez-faire management, 31


LAN switches, 227–228 LANs (local area networks), 666 Lao Tzu, 120 laptops, 381–382 LCS (large complex systems), 555 business vision, 557 phased-release rollout plan, 558–560 testing and program management, 557–558 lead generation, 874 leaders, 119–123 Leadership Is an Art (book), 120 leadership style, 120–123 LEAP (Lightweight Extensible Authentication Protocol), 234 learning in software agents, 441–443 learning organizations, 26, 840 legalism, 105 less-than-truckload (LTL) carriers, 619 Letizia (Web agent), 442 Level 3, 199 Lex Vehicle Leasing, offshore outsourcing by, 153–157 liability insurance, 725 libel, 722 Liberty Alliance Project, 693 Library of Congress, 677 license procurement, 741–742 licenses, 596 life balance, 123 life-critical applications, 594 Lightweight Extensible Authentication Protocol (LEAP), 234 Likert-type scales, 93 limited liability, 596 Lincoln Financial Group, 850 Lincoln National Reassurance Co., 850 line managers, 40–44 lo-fidelity systems, 466, 470–475 local area networks (LANs), 666 local exchange carriers, 196 entry into long-distance services, 199–204 regulation of, 202–203 locations of work, 808–809 Lockheed Martin, 654 long-distance telephone services, 192 entry of RBOCs (regional Bell operating companies), 203–204 market share, 199 rates, 199 resellers, 199 threat of Internet telephony to, 196 Lotus Notes, 437, 447, 819


low earth orbit satellites, 381 LTL (less-than-truckload) carriers, 619

M MAC (Media Access Control) address, 234, 383, 385 machine codes, 458 Maes, Patti, 442, 443 magnetic tapes, 279–280 mail-order vendors, 141 mailing lists, 309 mainframe computers, 304 mainframe processing, 177 major processes, 363 malfunctioning software, 590 malicious codes, 719 management processes asset management process, 84–86 quality management process, 86–87 supplier management process, 83–84 manual data entry, 327 manufacturing, 450–451 Web-based applications, 664 many-to-many business relationships, 652 many-to-many connectivity, 259–260 mapping business processes, 876–877 knowledge library, 889–890 performance metrics, 887–888 process decomposition, 877–883 process rules, 884–887 process templates and tools, 888–889 roles and responsibilities matrix, 883–884, 885–886 mapping, in data conversion, 326–327 MapPoint .NET, 695 market basket analysis, 311, 313 marketing, 605 marketplace knowledge, 875 Marriott, 654 Mars Polar Lander, 594 Marx, Karl, 836 materials management, 131 mature values, 30 McConnell, Steve, 549 MCI-WorldCom, 199, 203, 210–211 McNealy, Scott, 159 MDA (Model Driven Architecture), 496 Media Access Control (MAC), 385 Media Access Control (MAC) address, 234 MediaOne, 207 meeting methods, alternative, 828–831 conference meetings, 828–829 e-mail, 830

electronic meeting systems, 831 real-time data conferencing, 831 videoconferencing, 829–830 Web meetings, 830–831 megabytes, 303 Melissa virus, 719 Mellor, S., 496 mental model, 466, 470 mergers and acquisitions, 25 impact on IT organizations, 127 telecommunications industry, 207 messaging services, 177 MetaCrawler, 445 metadata, 429–430 MetaGroup, 677 metaphor, in XP (Extreme Programming), 524 Metcalfe's law, 835 method engineering, 461 methodologies, 459–460 evolution of, 458–459 generality, 460–462 overview, 457–458 problem-focused approach, 462–463 metrics, 29 developing, 132 for human resources, 89–92 for information services, 65–68 for IT procurement process, 77–78 performance, see performance metrics qualitative, 93 quantitative, 93 MFJ (Modified Final Judgment), 198 microfiche, 676 Microsoft ActiveX, 535 Microsoft Corp., 209 acquisitions of, 211 anti-trust suit against, 720 significant role in telephony, 591 Web services, 693–694 Microsoft DCOM, 535 Microsoft Passport, 695 Microsoft SQL 2000, 304 Microsoft Windows, 440 microwave communications, 381 Middle East Technical University, 450 middle tier, 464 Mill, John Stuart, 698 Miller, G., 507 Milner, Marius, 383 missing data, 324 misuse analysis and response, 358 mitigation in risk management, 547 MMS (multimedia messaging services), 248 mobile agents, 443, 445–446

Mobile Information Server 2001, 694 mobile Internet, 241–242 benefits of, 245–246 limitations of, 244 personal applications, 249–250 recommendations, 253–254 technology trend in, 250–253 uses of, 246–249 wireless vs. fixed-line access, 242–244 Mobitex, 252 Model Driven Architecture (MDA), 496 modeling use cases, 499–500 business-level, 508 constructs, 504–506 core elements of, 501–503 description of, 504 issues in, 506–508 sample application, 500–501 templates in, 504 Modified Final Judgment (MFJ), 198 modules, 531 moles, 776 monitoring in risk assessment, 553 monopolies, 198 Monthly Operations Report, 65 Morgan Stanley Dean Witter, 722 Mosaic software, 209 Motient, 252 motivating potential scores (MPS), 32 motivators, 98 Motorola, 656–657 MovianCrypt security package, 384 MP3 files, 208 MPLS, 263–264 MPS (motivating potential scores), 32 MRO (maintenance, repair and operating supplies) goods, 637, 645–649 MUDs (Multi-User Domains), 766 multi-tier architecture, 666–667 MultiAgent Decision Support System, 449 multidimensional OLAP (online analytical processing), 289, 308 multimedia data mining, 674 multimedia files, 437 multimedia messaging services (MMS), 248 multimedia search and retrieval, 866 multiple layer testing, 578–579 myths in software development, 581–588

N n-tier architecture, 666–667 Napster, 208 NAS (network-attached storage), 270


NASDAQ.com, 695 National Association of Fire Equipment Distributors, 721 National Center for Manufacturing Services, 451 National Conference of Commissioners of Uniform State Laws (NCCUSL), 596 National Institute of Standards and Technology, 446, 773 National Research Council of Canada, 450 NCCUSL (National Conference of Commissioners of Uniform State Laws), 596 .net, 675 .NET enterprise servers, 694 .NET framework, 416–417 as a component framework product, 535–536 components, 694–695 development of, 410–412 vs. J2EE (Java Platform Enterprise Edition), 417–422 in organizations, 695 standards, 692–693 technologies used in, 412–413 Web services in, 695 net markets, 641–642 .NET Server, 694 netiquette, 766 NetMeeting, 819 Netscape Navigator, 437 Network Associates, 336 network-attached storage (NAS), 270 network-based IDSs (intrusion detection systems), 391–392, 397–398 network cards, 225 network computing, 112–113 network controls, in outsourcing agreements, 148 network KMS (knowledge management system), 860 network platform services, measures for, 29 network programming language, 116 network security, 234, 358 Network Stumbler (software), 383 Network Systems Corp., 272 network x-ray tools, 336 networked organizations, 159–160 networking consultants, 224 networks, 743 connectivity, 116, 259–260 management, 177


neural, 224 sniffers, 336, 384 neural networks, 294 new values, 30 New York Stock Exchange, 114 New York Times Co., 723 Newell, Allen, 438 NewRoads, 626 newsgroups, acceptable use policies for, 764–767 next-generation operating systems, 116 Nielsen, Jakob, 475, 630 "no free lunch" rule, 105 nominal group techniques, 42 nondiscretionary costs, 49 noun phrase extraction, 867 NPD Intelect, 591–592 NTT DoCoMo, 693 nuclear missile warning system, malfunction in, 590 numeric data, 310 NYNEX, 207

O OAGIS (Open Application Group's Integration Specification), 431 OASIS (Organization for the Advancement of Structured Information Standards), 431 Object Constraint Language, 495 Object Management Group (OMG), 484, 495, 535 Object Modeling Technique (OMT), 484 object models and diagrams, 486 object orientation, 460–462 vs. agents, 436 in UML (Unified Modeling Language), 484 object-oriented data warehousing (OODW), 679 Object-Oriented Software Engineering (OOSE), 484 object-oriented systems analysis and design (OOSAD), 484 objectivity, 106 obscenity, 722–723 OECD (Organization for Economic Cooperation and Development), 703 off-the-shelf components, 532 Office of the Information Commissioner, 701–702 office suites, 864

offshore outsourcing, 153; see also outsourcing addressing cultural differences in, 157 effective communication in, 156–157 future of, 158 managing projects and relationships in, 155–156 recommended actions in, 157–158 virtual teams in, 821–822 Ohio Department of Education, 695 OIC (Office of the Information Commissioner), 701–702 OLAP (online analytical processing), 288–290, 444 OLTP (online transaction processing), 280 OMG (Object Management Group), 484, 495, 535 Omnexus, 658 OMT (Object Modeling Technique), 484 one-to-many business relationships, 652 one-to-many connectivity, 259–260 one-to-one network connectivity, 259–260 online analytical processing (OLAP), 288–290, 444 online auctions, 446, 625, 641 online content providers, 625 online public libraries, 677 online retailers, 625 online trading, 591 online transaction processing (OLTP), 280 OODW (object-oriented data warehousing), 679 OOSAD (object-oriented systems analysis and design), 484 OOSE (Object-Oriented Software Engineering), 484 Open Application Group's Integration Specification (OAGIS), 431 open architecture, 194 open-ended questions, 93 open systems, 439 Open Systems Interconnect (OSI), 273 open workspace, 524–526 openness, 106 operating systems, 116, 443, 458 optionality of data, 323 Oracle Corp., 304, 483, 591 Oracle Database, 304 .org, 675 Organization for Economic Cooperation and Development (OECD), 703 Organization for the Advancement of Structured Information Standards (OASIS), 431

organizational change, 127, 177–180 Organizational Development (book), 27 organizational systems and processes, 184–185 organizations, 363; see also knowledge management in information age, 858 models, 178, 180 originating access, 199 orphaned records, 323 OSI (Open Systems Interconnect), 273 Otis Elevator, 176 Out of Control and New Rules for the New Economy (book), 116 outletzoo.com, 626 outsourcing, 135–136; see also agreements for outsourcing; vendors audit alternatives, 150 challenges in, 139–142 activity-based costing analysis, 142 budgeting by deliverables, 139–140 subsidies, 140–141 of commodity-based services, 129 extended staffing in, 143–144 and internal service organizations, 137–138 motivation in, 137 offshore, 153–158 performance claims and reality in, 136 pricing, 142–143 protective measures, 148–149 overhead costs, 142 Oxygen Project, 215, 220

P P2P (peer-to-peer) transactions, 208 P3P (Platform for Privacy Preferences), 711 PABADIS multi-agent system, 451 Pacific Bell, 207 packet switching, 195 padded cell systems, 395 pair programming, 522 PairGain, 723 Palm Pilots, 382 PANs (personal area networks), 242 paradigm shift, 436 parallel pulling, 677 Parsons, J., 493 partner-providers, 55 partners, network of, 114 partnership maturity, 15 levels of, 11, 14 rating system for, 17


Pascal programming language, 460 passive vulnerability assessment tools, 396 password, 732 pay-per-view channels, 115 PCS (personal communication services), 204, 210 PDAs (personal digital assistants), 381–382 peer-to-peer (P2P) transactions, 208 Pennsylvania, 695 penny stocks, 626 people development metrics, 96 people metrics, 91 PeopleSoft, 38, 159, 483 performance claims, in outsourcing, 136 performance factors, 8 performance metrics, 77–78; see also metrics best practices, 97–99 in business process mapping, 887–888 case study, 94–97 challenges in, 99–100 human resource alignment, 89–91 people metrics, 91 process metrics, 92 performance of employees, 809 performance tests, 746 permanent virtual circuits (PVCs), 260 person-to-computer mobile Internet applications, 249 person-to-person mobile Internet applications, 247–248 personal area networks (PANs), 242 personal communication services (PCS), 204, 210 personal digital assistants (PDAs), 381–382 pervasive computing, 115–116, 214–215 pervasive information systems, 216–221; see also information systems in business travel, 217 creating, 219–221 in emergency, 216–217 in logistics management, 217 modeling, 218–219 in workgroup meeting, 216 Peters, Tom, 136 pharmacies, 608 Philip Morris, 849 pilot tests, 746 PKI (public key infrastructure), 386 PL/1 programming language, 460 platform consistency, 477 Platform for Privacy Preferences (P3P), 711 point-of-origin environment, 790 police records, 701–702 pornography, 722–723


portals, 629, 641 Porter, Michael, 626 portfolio manager agent, 448 Portolano Project, 215, 220 post-implement assessment process, 64 power, and leadership, 122 predictive models, 293 presentation layer, 218 prevention in risk management, 547 price discrimination, 194 price-fixing, 655 PricewaterhouseCooper, 848 Principle-Centered Leadership (book), 120 printers, 225 priorities, control over, 137 Privacy Act (1974), 703 private exchanges, 656–657 private key encryption, 259 problem decomposition, 531 process analysis and design, 129–130 process-based firms, 131 process-based information technology (IT) organizations; see also information technology (IT) organizations core processes, 127–128 design challenges, 130–133 key disciplines, 128–130 organizational imperatives, 126–127 process-based IT organizations core processes, 127–128 design challenges, 130–133 key disciplines, 128–130 organizational imperatives, 126–127 process-focused firms, 131 process knowledge, 875 process metrics, 92 processes, 363 cross-functional integration, 126 globalization, 126 knowledge management in, 874–875 redesign of, 187 Procter & Gamble, 446, 654 productivity, 32 productivity paradox, 21 products, 782 evaluation and testing, 740–741 interoperability, 233 selection, 741 professionalism, 105 program change control, 147 program management, 129 programmable objects, 116 programmer tests, 522

programming languages, 328, 460 project director, 63 Project Management Institute, 545 project managers, 32, 57, 63 project office, 68–69 projects complexity of, 561–569 delivery of, 66, 96 life cycle, 546 management, 545–546 roles in, 62–63 scheduling, 611 scope, 611 scorecard, 67 stakeholders, 611 promotion in E-commerce, 630 prototyping, 483, 550 proximity detectors, 214 Prusak, L., 836 PSTN (public switched telephone network), 195, 210 public key cryptography, 386 public keys, 259 public libraries, 677 Public Service Commission, 210 public switched telephone network (PSTN), 195, 210 Public Utilities Commission (PUC), 210 PUC (Public Utilities Commission), 210 pull technology, 677 punch cards, 279 punctuated chaos, 112 punctuated equilibrium, 111 purchasing agents, 446 department, 605 Web-based applications, 664–665 purchasing systems, 638 auctions, 641 E-catalogs, 640–641 integrated frameworks, 642, 643 marketplaces, 641–642 Purdue University, 459 purposiveness in software agents, 439 push technology, 448, 677 PVCs (permanent virtual circuits), 260

Q QoS (Quality-of-Service), 262–263 qualitative metrics, 93 quality assurance (software development), 573–575 certification, 579

code walkthroughs, 577 multiple layer testing, 578–579 preliminary testing, 578 requirements analysis, 575–576 SPR management and verification, 579 quality management, 75–76 key issues in, 87 subprocesses, 86 quality metrics, 78 Quality-of-Service (QoS), 262–263 quantitative metrics, 93 query and reporting tools, 308 Qwest, 199, 207, 211

R RAD (Rapid Application Development), 483 RAID (redundant array of inexpensive disks), 270 Ramackers, G., 492 Rapid Application Development (RAD), 483 Rapid Development (book), 549 Rational Unified Process (RUP), 504 ratios, 93 Raytheon, 654 RBOCs (regional Bell operating companies), 198, 203–204 RDBMS (relational database management systems), 315 real-time conferencing, 819, 831 real-time packet switching, 195 real-time user awareness support, 358 RealNetworks, 718 recovery of operations, 146 recovery procedures, 779 recovery time objective (RTO), 368 redundancy of data, 323 redundant systems, 774 refactoring, 523 referential integrity of data, 322 referral sites, 629 regional Bell operating companies (RBOCs), 198, 203–204 reinsurance, 850 relational database management systems (RDBMS), 315 relational mathematics, 327 relational OLAP (online analytical processing), 290, 444 release planning, 520 releases (software), 586–587 reliability and fallover tests, 746 reliability, as metrics of service levels, 164 remote access services, 225


report generators, 458 reporting tools, for information services, 65–68 repository KMS (knowledge management system), 860–863; see also KMS (knowledge management systems) affinity group filtering, 868 collaboration and messaging, 868 gateways, 869 knowledge directories, 868 knowledge mapping, 866–867 management tools, 869–870 multimedia search and retrieval, 866 personalization, 867 standing queries, 867–868 text search and retrieval, 865–866 user interface design, 864–865 representational IQ of information, 681 Request for Proposal (RFP), 228, 235–238, 446 Request for Quotations (RFQs), 446 requirements analysis, 575–576 requirements determination, 75 research and development, 605 reseeded environment, 790–791 Resource Reservation Protocol (RSVP), 260, 263 response time, 55 for internal service providers, 137 as metrics of service levels, 164 restructurings, 127 retailers, 179 retrospectives, in XP (Extreme Programming), 525 Retsina (software agent), 448 return on investment (ROI), 8, 672 reusability of components, 537, 539–540 reused solutions, 127 review processes, 29 reward mechanisms, 814–815 RFP (Request for Proposal), 228, 235–238, 446 RFQs (Request for Quotations), 446 Right to Financial Privacy Act (1978), 703 RISC (Reduced Instruction Set Computer), 304 risk analysis, 619 Internet security architecture, 737–738 risk aversion principle, 105 risk management, 546 common mistakes, 547–553 common project risks, 553 critical risk information in, 554


implementing, 546 matrix, 61 review assessment in, 553–554 standard approaches to, 547 risk management review (RMR), 368 risk posture analysis and response, 356–357, 358 RMR (risk management review), 368 robots, 445 Rochester Telephone, 210 Rockwell Automation/Allen-Bradley, 451 ROI (return on investment), 8, 672 role models, 123–124 Rosetta Stone, 427 Rossi, M., 494 routers, 224 routine error handling, 329 Rowland, Larry, 850 Royal Dutch Shell, 846 RSVP (Resource Reservation Protocol), 260, 263 RTO (recovery time objective), 368 rule induction, 295 Rumbaugh, J., 492 RUP (Rational Unified Process), 504

S SA&D (structured analysis and design), 459 SAFARI (intelligent tutoring system), 450 Safe Harbor Principles, 704–706, 708, 711 sales channels, 114 sales cycle management, 874 sales force automation, 664 sales records, 309 SANs (storage area networks), 269–270 accommodating traffic on enterprise data network, 275–277 components of, 274 evolution and technology review, 272–273 Fibre Channel technology, 272–275 rationale for, 270–272 SAP, 38, 159, 483 satellite communications, 381, 811 SBC, 202, 207 SBUs (strategic business units), 130 scales, 93 scheduling, 450 Schneider National, 179–180 scope control, 550 SCRUM, 514, 515 SCSI (Small Computer Systems Interface), 272 Sculley, John, 440

SDLC (System Development Life Cycle), 483 search engines, 445 in KMS (knowledge management systems), 864 in text search and retrieval, 865–866 in Web integration, 674, 678 seat time, 105 Secure Sockets Layer (SSL), 386 SecurID, 385–386 Securities Exchange Act of 1934, 723 securities laws, violations of, 723 security, 772 access protocols, 234 administration, 147 department, 734 developing, 353–354 direct risk mitigation in, 354–356 and downsizing, 799–803 dynamic prevention and protection, 357–360 management commitment to, 353 ready-aim-fire approach to, 351–352 risk posture assessment, 356–357 tests, 746 in user computing, 772 VPNs (virtual private networks), 265 self-certification, 704 self-directed teams, in XP (Extreme Programming), 525–526 self-reproducing programs (SRPs), 776 Selic, B., 492 Semantic Web, 691 Senge, Peter, 846 sensors, 214 sequence diagrams, 488 sequence discovery, 293, 312 servers, 163, 225, 742–743 service delivery, 50–52 service-desk technology, 333 service levels, 8 management, see SLM response standards, 54 in service delivery, 50 service packs, 591 service-quality analysis, 757–758 service-quality gap, 755 Service Set Identifier, 388 Service Set Identifiers (SSIDs), 383 serviceability, as metrics of service levels, 164 SERVQUAL assessment tool, 755 SGML (Standard Generalized Markup Language), 426 shared environment, 790

SharePoint Portal Server 2001, 694 shipping services, 626 Shop Floor Agents system, 451 shopping carts, 411 short messaging system (SMS), 248 shrinkwrap license, 596 Siau, K., 494 Siebel, 159 signal-to-noise ratios, 383 Signaling System 7 (SS7) standard, 209 Silicon Valley, 111, 655 SIM (Society for Information Management), 7, 73, 779 Simon, Herbert, 439, 449 Simple Object Access Protocol (SOAP), 407–408, 427, 693 simulated prototyping, 476–480 Singapore, E-commerce initiatives in, 627 SITA (Societe Internationale de Telecommunications Aeronautiques), 114 skills maturity, 15 levels of, 13 rating system for, 18 skillsets, 132–133 SLAs (service level agreements), 52–56 ASPs (application service providers), 164–166 assigning customer relationship executives (CREs), 53–54 components of, 53 in CP (continuity planning), 375 and engineering management, 558 escalation procedures in, 55 management process, 56 in maturity assessment, 16 metrics, 164 performance metrics in, 65, 132 user experience, 334–335 SLC (System Life Cycle), 483 slippery slope, 105 SLM (service level management) activities, 336–337 business process focus in, 339 components of, 337–338 in E-business, 331–332 improving, 338–339 measuring end-to-end response time in, 335–337 from system to application focus, 333–335


Small Computer Systems Interface (SCSI), 272 smart cards, 732 smart client software, 694 SmartProcurement, 446 SMEs (small- and mid-sized enterprises), 159 SMS (short messaging system), 248 Sniffer Network Analyzer, 336 snowflake schema, in data warehousing, 286–287 SOAP (Simple Object Access Protocol), 407–408, 427, 693 social contacts and support, 810–811, 815–816 Societe Internationale de Telecommunications Aeronautiques (SITA), 114 Society for Information Management (SIM), 7, 73, 779 Softbot, 445 software agents, 435–436 applications of, 443–451 data warehousing, 444 distance education, 450 e-mail, 443–444 electronic commerce, 445–446 financial applications, 447 healthcare, 451 information retrieval, 445 Internet, 444–445 intranets, 447 manufacturing, 450–451 monitoring, 447 networking and telecommunications, 449–450 push technology, 448 attributes of, 437–443 adaptation, 441 autonomy, 438 bounded rationality, 439 cooperation, 438 human interaction and anthropomorphism, 439–441 intelligence, 438 learning, 441–443 mobility, 443 openness, 439 purposiveness, 439 background, 436–437 implementation issues, 452 technical issues, 452–453 software defects, 590–594 software development, 511–512 agile methodologies in, 513–514


causes of failure, 512 corporate spending on, 499 ethical responsibility for, 589–590, 594–595 improvements in, 513 legal liability in, 595 life cycle, 575 measures for, 29 myths about, 581–588 quality assurance, 573–579 recommendations, 587–588 trends in, 512–513 Software Engineering Institute, 7 software industry annual revenues, 591 employment, 591 ethical responsibility, 594–595 laws, 596 software licenses, 596 software piracy, 775–776 software problem reports (SPRs), 578–579 Software Publishers Association (SPA), 341, 775–776 Software Research, 336 software testing tools, 593 solution selling, 878–882 solutions, delivery of, 127–128 sourcing and alliances management, 129, 131 Southern New England Telephone, 207 SPA (Software Publishers Association), 341, 775–776 spam, 718 spanning-tree, 233 special projects, audits of, 343 spiders, 445 Spiral method, 483 spot sourcing, 632 SPR (software problem reports), 578–579 spreadsheets, 512, 782 Sprint, 199 SQL Server 2000, 304, 694 SQL (Structured Query Language), 328 SRDF (Symmetrix Remote Data Facility), 276 SRPs (self-reproducing programs), 776 SS7 (Signaling System 7) standard, 209 SSIDs (Service Set Identifiers), 383 SSL (Secure Sockets Layer), 386 Standard Generalized Markup Language (SGML), 426 standardization, in B2B (business-to-business) commerce, 633 standing queries, 867–868 Standish Group, 499 STAR organizations, 26

and drivers of change, 26 work climate and culture in, 31–32 star schema, in data warehousing, 286 start-up companies, 626 State Public Utility Commission, 197 statechart diagrams, 489, 491 static IP address, 225 statistical analysis, 308 statistical review, 785, 788 Statoil, 175 stereotypes, 492 Stewart, T.A., 857 stock exchanges, 313, 448–449 Stone, Rosamund, 120 storage area networks, see SANs storyboarding, 472–474 strategic breakout method, 608–609; see also E-business breakout stage, 616–617 diagnose stage, 612–616 initiate stage, 610–611 transition stage, 617–622 strategic business units (SBUs), 130 strategic option generator, 42 strategic planning, 148–149 streaming video/audio tools, 864 structural analyzers, 458 structural diagrams, 485–487 structure mining, 292–293 structured analysis and design (SA&D), 459 structured methodologies, 459–460 structured revolution, 458 structured threats, in cyberspace, 350–351 subprocesses, 363 subsidies, 140–141 Sun Microsystems Inc., 693, 696 supermarkets, 608 supervised induction, 293 suppliers; see also customers agents, 446 and consortia exchanges, 655 in customer-supplier life cycle, 606–608 impact of E-procurement on, 646, 655 management, 75, 79, 83–84 supply chains, 114 in B2B (business-to-business) commerce, 633 consortia exchanges in, 655 data warehousing in, 290 integration of, 184 support services, assessing quality of, 754–756 surveys, 94 SVCs (switched virtual circuits), 260

Sweden, E-commerce initiatives in, 627 Swire, Peter P., 703 switched virtual circuits (SVCs), 260 switches, 224, 227–228 SWOT (strength-weakness-opportunities-threat) analysis, 612 assessment matrices, 617 dimensions in, 616 E-business assessment form, 615 Sycara, Katia, 448 Symmetrix Remote Data Facility (SRDF), 276 system, 363 system actors, 507 System Development Life Cycle (SDLC), 483 System Life Cycle (SLC), 483 system model, 471–472 system strategy, in BASEline analysis, 42 systems, 782 systems analysis, 531, 775 systems architecture, 778 systems development, 555, 774–775 systems integration, 131 systems lead, 63 systems security, 177

T tacit knowledge, 873 tagged values, 492 tailoring, 137 tape backup systems, 270 tape libraries, 270 task analysis, 468–469 task consistency, 477 tasks, 363 taxonomies, 866–867 TBDFs (transborder data flows), 698–699, 704 TCI Cable, 207 TCP/IP protocol, 194 TDMA, 381 TechCo, 821–822 technical architects, 63 technical support, 664 technology leverage metrics, 96 technology management, 186–187 technology scope maturity, 15 levels of, 12 rating system for, 17 telecommunications, 131 Telecommunications Act of 1934, 198 Telecommunications Act of 1996, 192 entry into local exchange services, 202–203


entry into long-distance services, 203–204 goals of, 196–197 history of, 197–202 telecommunications industry, 191–193 anti-trust suits, 198 bankruptcies in, 192 fraud detection, 313 future of, 208–209 impact of cable television on, 205–206 impact of wireless services on, 204–206 mergers and acquisitions, 206–208 regulation of, 196–198, 210 restructuring of, 197 technological changes in, 193–196 telecommuters, 807 teleconferencing, 820, 828–829 telephone services, 112 television networks, 115 telework, 807–808; see also face-to-face management management-related obstacles, 809–811 modified reward mechanisms, 814–815 organizational identity in, 815 and social needs, 815–816 task- and resource-related obstacles, 808–809 task support, 813–814 technology resources, 811–812 technology support, 812–813 TELRIC (Total Element Long Run Incremental Cost), 210 terabytes, 303 terminating access, 199 test-driven development, 522, 529 test drivers, 458 Tête-à-tête, 446 Texas Instruments, 186 text messaging, 381 text search and retrieval, 865–866 The Conference Board, 7 thin clients, 666 third-party logistics, 179, 608 third-party review, 150 three-tier architecture, 666–667 tiers, 666–667 time-based competition, 26 time bombs, 776 time-boxing, 584 time-delay packet switching, 195 TimeWarner, 207 Tivoli, 336 token rings, 225, 275


tokens, 732 tools, 782 Total Element Long Run Incremental Cost (TELRIC), 210 Toys R Us, 625, 631 tracking system, 552 trade secrets, 718, 723 trademark infringement, 723 Tradeout.com, 653 transaction brokers, 625 transborder data flows (TBDFs), 698, 704 transformation of data, 310 transition stage (strategic breakout method), 617–622 Transora, 654 Travelers Property & Casualty, 184 Treasury bonds, 449 Triple DES encryption, 259 Trojan horses, 776 trucking, 179 TRUSTe, 703, 709 trustworthiness, 32, 106 tunneling, 384 turnover environment, 790 turnover rates, 90 two-tier architecture, 666–667

U ubiquitous communications, 112 UCC (Uniform Commercial Code), 596 UCITA (Uniform Computer Information Transactions Act), 596 UDDI (Universal Description, Discovery, and Integration), 408–409, 427, 693 UML (Unified Modeling Language), 509 behavioral diagrams, 487–492 Diagram Interchange, 495 emergence of, 484 evaluation of, 493–495 extensibility mechanisms, 492 future of, 495 Infrastructure, 495 model management diagrams, 492 modeling, 485 new uses of, 496 object orientation, 484 in object orientation methodology, 462 structural diagrams, 485–487 Superstructure, 495 use cases in, 500 UN/CEFACT (United Nations Centre for Trade Facilitation), 431

unbundled network elements (UNEs), 203, 207 uncertainty avoidance, 823–824 UNEs (unbundled network elements), 203, 207 unfair trade practices, 655 Unicode, 428 Unified Modeling Language, see UML Uniform Commercial Code (UCC), 596 Uniform Computer Information Transactions Act (UCITA), 596 Unilever, 654 unintended behaviors, 96, 98 uniqueness of data, 322 unit tests, 522 United Kingdom, data protection laws in, 700–702 United Nations Centre for Trade Facilitation (UN/CEFACT), 431 Universal Description, Discovery, and Integration (UDDI), 408–409, 427, 693 universal relation, 327 universalism, 105 University of Washington, 445 UNIX operating system, 304 unknown known risks, 546, 547 unknown unknown risks, 546, 547 unlicensed software, 105 unstructured threats, in cyberspace, 350 UPS, 625 US West, 202, 207 usability, 465–466 usability testing, 476 usable systems, 465–466 benefits of, 466 creating, 466–467 design principles, 475–476, 477–478 function assignment, 469–470 hi-fidelity design and testing, 480–481 lo-fidelity to hi-fidelity systems, 470–475 mental model development, 470 simulated prototyping, 476–480 specifications, 467 task analysis, 468–469 user analysis, 468 usage mining, 293 use cases, 501; see also modeling use cases becoming members of, 505 diagrams, 487–488, 503 documentation of, 509 identification and selection of, 508–509 system-level, 509

in UML (Unified Modeling Language), 500 user analysis, 468 user computing, 771 controls, 778–779 environment, 789–791 user computing risks, 771–772 computer viruses, 776–777 copyright violations, 775–776 inaccurate information, 777–778 inadequate support, 773 inadequate training, 773 incompatible systems, 773–774 ineffective implementations, 774–775 inefficient use of resources, 772–773 redundant systems, 774 unauthorized access, 777 unauthorized remote access, 777 weak security, 772 user-controlled audit authority, 150 user-developed applications, 781–782 characteristics of, 782–783 definitions, 782 environment-function matrix, 793 guidelines, 797 misconceptions in, 783–784 preparing for review, 784–785 application criticality, 792 application development controls, 792 application environment, 791–792 level of security, 794 product use and availability, 794–795 review methods, 787–789 review objectives, 785–787 reviewing applications, 796 scope and content of review, 789 user capabilities and development product knowledge, 795 user management of data, 795 user groups, segmenting, 756–757 user satisfaction, 754–756 and action plans, 757–758 as metrics of service levels, 164 Userland Software Inc., 693 utilitarian principle, 105 utilization metrics, 96 UUNET, 211

V V-model testing, 558 value-added resellers, 228 value chain analysis, 42


value template, 60 values, 27–29 mature vs. new, 30 variable costs, 136 VB.NET programming language, 418–419 vendors, 135–136; see also outsourcing auditing of, 150 control standards, 147 as extensions to internal staff, 143–144 internal reviews by, 150 personnel standards, 148 pricing, 142–143 stability of, 148 Verisign, 694 Verizon, 207 vertical portals (vortals), 653 video archives, 676 Video Privacy Protection Act, 703 video-rental, Web-enabled, 500–501 videoconferencing, 115 as alternative to face-to-face meetings, 829–830 bit arbitrage in, 194 with mobile Internet, 247–248 in outsourcing, 157 in telework arrangements, 812 in virtual teams, 820 violent implementation, 26 virtual catalogs, 640–641 virtual corporations, 160–161 virtual data warehouse, 287–289 virtual domain, 351 virtual networks, 114 virtual organizations, 159 virtual teams, 807 case examples, 821–822 cultural dimension, 823–824 managing, 824–825 overview, 819–820 tasks and technology, 820 viruses, 234, 719, 776–777 Visual Basic, 460–461, 694 visual programming languages, 460 Visual Studio, 536 Visual Studio .NET, 411, 694 VitalSuite client capture tool, 336 voice communications, 811 voice telephony, 194 vortals (vertical portals), 653 VPNs (virtual private networks), 225 in E-business, 605 evaluation technology, 264–267 key services, 257–261 security, 384


in telework arrangements, 811 vulnerability analysis and response, 358 vulnerability assessment tools, 396

W Wal-Mart, 179, 627, 656 WAN (wide area network), 276, 666 WAP (Wireless Access Protocol), 244, 251, 386 war driving, 382–383, 385 warning systems, 590 Waterfall method, 483 Waterman, Robert, 136 WCDMA, 251–252 Web advertising services, 625 Web applications, 665; see also Internet security architecture client vs. server processing, 666 factors in developing, 730 history of, 729–731 as model for distributed computing, 665–667 scalability and performance, 667–668 security, 668 shared database vs. individual log-in, 667 state and session management, 668 tiers, 666–667 users, 729 Web authoring tools, 864 Web Automation Toolkit, 677 Web-based applications, 372–376 Web-based data mining, 292–293 Web-based data warehousing, 290–291, 673–674 advantages of, 676 business information, 676–677 challenges, 678–680 costs of, 677 data quality framework in, 682–687 toolkits, 677–678 Web integration in, 674–676 Web browsers, 437, 536 Web data, 676 access to, 679–680 coverage of, 683 decision support capabilities, 685–687 dynamic nature of, 684 formats, 679 quadrant characteristics, 684–685 quality of, 681–682 value to businesses, 680–682 Web integration, 674–676 architecture, 675

business information, 676–677 challenges, 678–680 costs of, 677 data quality framework in, 682–687 directories, 675 toolkits, 677–678 Web meetings, 830–831 Web research acceptable use policies for, 764–767 XML in, 429–430 Web self-service, 663–664 Web services, 405–406 base functions in, 412 challenges, 408 copyrighting of, 680 cross-language compatibility of, 411 cross-platform compatibility of, 411 implementing, 695–696 implementing P3P (Platform for Privacy Preferences), 711 in .NET framework, 695 sample application, 410–411 security, 412 standards, 692–693 technologies used in, 412–413 Web Services Description Language (WSDL), 408, 427, 693 Web sites in business strategy, 605 functionality of, 629–630 linking to other sites, 105 permanency and consistency of, 680 privacy policies, 703 promotion of, 629 referral sites, 629 Web surfing, and loss of employee productivity, 724 webMethods Inc., 677 WebTV, 211 WEP (Wired Equivalent Privacy), 383, 384–385 Western Electric, 198 Westin, Alan, 715 wide area network (WAN), 276, 666 Williams, 199 Williams-Sonoma, 627 Windows CE operating system, 694 Windows operating system, 437, 591 Windows XP operating system, 591, 694 Wired Equivalent Privacy (WEP), 383, 384–385 Wireless Access Protocol (WAP), 251, 386 Wireless Application Protocol (WAP), 244 wireless communications, 213

wireless Internet, 195; see also Internet data rates, 251 vs. fixed-line Internet, 242–244 technologies in, 243 wireless LAN, 252 defenses, 383–387 vs. fixed-line Internet access, 242 hardware, 382 ISO stack, 382 overview, 379–380 risks, 382–383 standards and protocols, 380–381 in telework arrangements, 811 wireless loop, 210 wireless phone services, 204–206 wireless security, 379; see also security auditing, 386 awareness and simple procedures in, 384 holes in, 386 technical solutions, 384–386 Wireless Security Auditor, 386–387 Wireless Transport Layer Security (WTLS), 386 WLAN (wireless local area network), 242, 252 word co-occurrence, 867 work environment metrics, 92 work units, 782, 790 workgroups, 216, 782, 790 working clients, 55, 63, 876 workout sessions, 31 workplace surfing, 724 workstation operating system (WOS), 270 World Wide Web, 437 corporate intranets based on, 863 in SLM (service level management), 333 and telecommunications industry, 191 World Wide Web Consortium, 691 WorldCom, 192 worms, 776 WOS (workstation operating system), 270 WSDL (Web Services Description Language), 408, 427, 693 WSJ.com, 625 WTLS (Wireless Transport Layer Security), 386

X Xerox PARC, 724 XHTML (Extensible Hypertext Markup Language), 426, 430 XMI (XML Metadata Interchange), 496 XML (Extensible Markup Language), 406–407


and ASPs (application service providers), 163–164 description of, 426 history and context, 426 overview, 425–426 standards, 430–432, 692–693 uses of, 428–430 value of, 426–428 XML Metadata Interchange (XMI), 496 XP (Extreme Programming), 514–516 adaptations, 528–529 coding standard, 524 collective code ownership, 523–524 common practices, 524–526 continuous integration, 523 core practices, 518–524 customer tests, 521 definition of, 516–517 design improvement, 523 evaluating projects for, 526–527 metaphor, 524


organization, 517–518 pair programming, 522 planning game, 520–521 simple design, 521–522 small releases, 521 starting, 527–528 sustainable pace, 524 test-driven development, 522 values, 517, 528 whole team, 519–520 XSL (Extensible Stylesheet Language), 411, 428, 430

Y Yahoo!, 718 Yankee auction rules, 641 Year 2000 (Y2K) compliance, 342

Z Zander, Benjamin, 120