Posts Tagged ‘College’

IT and Post-Institutional Higher Education: Will We Still Need Brad When He’s 54?

“There are two possible solutions,” Hercule Poirot says to the assembled suspects in Murder on the Orient Express (that’s p. 304 in the Kindle edition, but the 1974 movie starring Albert Finney is way better than the book, and it and the book are both much better than the abominable 2011 PBS version with David Suchet). “I shall put them both before you,” Poirot continues, “…to judge which solution is the right one.”

So it is for the future role, organization, and leadership of higher-education IT. There are two possible solutions. There’s a reasonably straightforward projection of how the role of IT in higher education will evolve into the mid-range future, but there’s also a more complicated one. The first assumes institutional continuity and evolutionary change. The second doesn’t.

IT Domains

How does IT serve higher education? Let me count the ways:

  1. Infrastructure for the transfer and storage of pedagogical, bibliographic, research, operational, and administrative information, in close synergy with other physical infrastructure such as plumbing, wiring, buildings, sensors, controls, roads, and vehicles. This includes not only hardware such as processors, storage, networking, and end-user devices, but also basic functionality such as database management and hosting (or virtualizing) servers.
  2. Administrative systems that manage, analyze, and display the information students, faculty, and staff need to manage their own work and that of their departments. This includes identity management, authentication, and other so-called “middleware” through which institutions define their communities.
  3. Pedagogical applications students and faculty need to enable teaching and learning, including tools for data analysis, bibliography, simulation, writing, multimedia, presentations, discussion, and guidance.
  4. Research tools faculty and students need to advance knowledge, including some tools that also serve pedagogy plus a broad array of devices and systems to measure, gather, simulate, manage, share, distill, analyze, display, and otherwise bring data to bear on scholarly questions.
  5. Community services to support interaction and collaboration, including systems for messaging, collaboration, broadcasting, and socialization both within campuses and across their boundaries.

“…A Suit of Wagon Lit Uniform…and a Pass Key…”

The straightforward projection, analogous to Poirot’s simpler solution (an unknown stranger committed the crime, and escaped undetected), stems from projections of how institutions themselves might address each of the IT domains as new services and devices become available, especially cloud-based services and consumer-based end-user devices. The core assumptions are that the important loci of decisions are intra-institutional, and that institutions make their own choices to maximize local benefit (or, in the economic terms I mentioned in an earlier post, to maximize their individual utility).

Most current thinking in this vein goes something like this:

  • We will outsource generic services, platforms, and storage, and perhaps
  • consolidate and standardize support for core applications and
  • leave users on their own insofar as commercial devices such as phones and tablets are concerned, but
  • we must for the foreseeable future continue to have administrative systems securely dedicated and configured for our unique institutional needs, and similarly
  • we must maintain control over our pedagogical applications and research tools since they help distinguish us from the competition.

Evolution based on this thinking entails dramatic shrinkage in data-center facilities, as virtualized servers housed in or provided by commercial or collective entities replace campus-based hosting of major systems. It entails several key administrative and community-service systems being replaced by standard commercial offerings — for example, the replacement of expense-reimbursement systems by commercial products such as Concur, of dedicated payroll systems by commercial services such as ADP, and of campus messaging, calendaring, and even document-management systems by more general services such as Google’s or Microsoft’s. Finally, thinking like this typically drives consolidation and standardization of user support, bringing departmental support entities into alignment if not under the authority of central IT, and standardizing requirements and services to reduce response times and staff costs.

How might higher-education IT evolve if this is how things go? In particular, what effects would it have on IT organization and leadership?

One clear consequence of such straightforward evolution is a continuing need for central guidance and management across essentially the current array of IT domains. As I tried to suggest in a recent article, the nature of that guidance and management would change, in that control would give way to collaboration and influence. But institutions would retain responsibility for IT functions, and it would remain important for major systems to be managed or procured centrally for the general good. Although the skills required of the “chief information officer” would be different, CIOs would still be necessary, and most cross-institutional efforts would be mediated through them. Many of those cross-institutional efforts would involve coordinated action of various kinds, ranging from similar approaches to vendors through collective procurement to joint development.

We’d still need Brads.

“Say What You Like, Trial by Jury is a Sound System…”

If we think about the future unconventionally (as Poirot does in his second solution — spoiler in the last section below!), a somewhat more radical, extra-institutional projection emerges. What if Accenture, McKinsey, and Bain are right, and IT contributes very little to the distinctiveness of institutions — in which case colleges and universities have no business doing IT idiosyncratically or even individually?

In that case,

  • we will outsource almost all IT infrastructure, applications, services, and support, either to collective enterprises or to commercial providers, and therefore
  • we will not need data centers or staff, including server administrators, programmers, and administrative-systems technical staff, so that
  • the role of institutional IT will be largely to provide only highly tailored support for research and instruction, which means that
  • in most cases there will be little to be gained from centralizing IT,
  • it will make sense for academic departments to do their own IT, and
  • we can rely on individual business units to negotiate appropriate administrative systems and services, and so
  • the balance will shift from centralized to decentralized IT organization and staffing.

What if we’re right that mobility, broadband, cloud services, and distance learning are maturing to the point where they can transform education, so that we have simultaneous and similarly radical change on the academic front?

Despite changes in technology and economics, and some organizational evolution, higher education remains largely hierarchical. Vertically-organized colleges and universities grant degrees based on curricula largely determined internally, curricula largely comprise courses offered by the institution, institutions hire their own faculty to teach their own courses, and students enroll as degree candidates in a particular institution to take the courses that institution offers and thereby earn degrees. As Jim March used to point out, higher education today (well, okay, twenty years ago, when I worked with him at Stanford) is pretty similar to its origins: groups sitting around on rocks talking about books they’ve read.

It’s never been that simple, of course. Most students take some of their coursework from other institutions, some transfer from one to another, and since the 1960s there have been examples of network-based teaching. But the model has been remarkably robust across time and borders. It depends critically on the metaphor of the “campus”, the idea that students will be in one place for their studies.

Mobility, broadband, and the cloud redefine “campus” in ways that call the entire model into question, and thereby may transform higher education. A series of challenges lies ahead on this path. If we tackle and overcome these challenges, higher education, perhaps even including its role in research, could change in very fundamental ways.

The first challenge, which is already being widely addressed in colleges, universities, and other entities, is distance education: how to deliver instruction and promote learning effectively at a distance. Some efforts to address this challenge involve extrapolating from current models (many community colleges, “laptop colleges”, and for-profit institutions are examples of this), some involve recycling existing materials (Open CourseWare, and to a large extent the Khan Academy), and some involve experimenting with radically different approaches such as game-based simulation. There has already been considerable success with effective distance education, and more seems likely in the near future.

As it becomes feasible to teach and learn at a distance, so that students can be “located” on several “campuses” at once, students will have no reason to take all their coursework from a single institution. A question arises: If coursework comes from different “campuses”, who defines curriculum? Standardizing curriculum, as is already done in some professional graduate programs, is one way to address this problem — that is, we may define curriculum extra-institutionally, “above the campus”. Such standardization requires cross-institutional collaboration, oversight from professional associations or guilds, and/or government regulation. None of this works very well today, in part because such standardization threatens institutional autonomy and distinctiveness. But effective distance teaching and learning may impel change.

As courses relate to curricula without depending on a particular institution, it becomes possible to imagine divorcing the offering of courses from the awarding of degrees. In this radical, no-longer-vertical future, some institutions might simply sell instruction and other learning resources, while others might concentrate on admitting students to candidacy, vetting their choices of and progress through coursework offered by other institutions, and awarding degrees. (Of course, some might try to continue both instructing and certifying.) To manage all this, it will clearly be necessary to gather, hold, and appraise student records in some shared or central fashion.

To the extent this projection is valid, not only does the role of IT within institutions change, but the very role of institutions in higher education changes. It remains important that local support be available for the IT components of distinctive coursework, and of course for research, but almost everything else — administrative and community services, infrastructure, general support — becomes either so standardized and/or outsourced as to require no institutional support, or becomes an activity for higher education generally rather than colleges or universities individually. In the extreme case, the typical institution really doesn’t need a central IT organization.

In this scenario, individual colleges and universities don’t need Brads.

“…What Should We Tell the Yugo-Slavian Police?”

Poirot’s second solution to the Ratchett murder (everyone including the butler did it) requires astonishing and improbable synchronicity among a large number of widely dispersed individuals. That’s fine for a mystery novel, but rarely works out in real life.

I therefore don’t suggest that the radical scenario I sketched above will come to pass. As many scholars of higher education have pointed out, colleges and universities are organized and designed to resist change. So long as society entrusts higher education to colleges and universities and other entities like them, we are likely to see evolutionary rather than radical change. So my extreme scenario, perhaps absurd on its face, seeks only to suggest that we would do well to think well beyond institutional boundaries as we promote IT in higher education and consider its transformative potential.

And more: if we’re serious about the potentially transformative role of mobility, broadband, and the cloud in higher education, we need to consider not only what IT might change but also what effects that change will have on IT itself — and especially on its role within colleges and universities and across higher education.

Individual Utility, Joint Action, and The Prisoner’s Dilemma

Back in 1977, Ken Arrow, having won the Nobel Prize five years earlier, wondered about the internal functioning of firms. “To what extent is it necessary for the efficiency of a corporation,” he wrote, “that its decisions be made at a high level where a wide degree of information is, or can be made, available? How much, on the other hand, is gained by leaving a great deal of latitude to individual departments which are closer to the situations with which they deal, even though there may be some loss due to imperfect coordination?” The answer depends somewhat on whether the firm has one goal or several, on the correlation among multiple goals, and on the degree to which different departments contribute to different goals.

In general, though, the answer is sobering for advocates of decentralization. The severally optimal choices of departments rarely combine to yield the jointly optimal choice for the overall enterprise. That’s not to say that centralization is wrong, of course. It merely means that one must balance the healthy and interesting diversity that results from decentralization against the overall inefficiency it can cause.

If we shift focus from the firm to enterprises within an economic sector, the same observations hold. To the extent enterprises pursue diverse goals primarily for their own benefit rather than for the efficiency of the entire sector, that sector will be both diverse and inefficient — perhaps to the extremes of idiosyncrasy and counterproductivity. Put differently, if the actors within a sector value individuality, they will sacrifice sector-wide efficiency; if they value sector-wide efficiency, they must sacrifice individuality.

Higher education traditionally has placed a high value on institutional individuality. Some years back a Harvard faculty colleague of mine, Harold “Doc” Howe II (who had been US Commissioner of Education under Lyndon Johnson), observed how peculiar it was that mergers and acquisitions were so rarely contemplated, let alone achieved, in higher education, even though by any rational analysis there were myriad opportunities for interesting, effective mergers. (Does the United States really need almost 4,000 nonprofit, degree-granting postsecondary institutions, not to mention 14,000 public school districts?) Among research universities, for example, the combinations that produced Case Western Reserve University and Carnegie-Mellon University were two of the few successful mergers; there were some instances of acquisitions and subordinations (I’m not counting Brown/Pembroke, Columbia/Barnard, Tufts/Jackson, or their kin), and several prominent failures — for example, the failed attempts to merge the Cambridge anchors Harvard and MIT. (Wikipedia’s page on college mergers lists fewer than 100 mergers of any kind.)

If higher education isn’t going to gain efficiency through institutional aggregation, then its only option is to do so through institutional collaboration. There are lots of good examples where this has happened: I’d include athletic leagues, part of whose purpose is to negotiate effectively with networks; library collaborations, such as OCLC, that seek to reduce redundant effort; research collaborations, such as Fermilab, through which institutions share expensive facilities; and IT collaborations, such as Internet2.

That last is a bit different from the others, in that it involves a group of institutions joining forces to buy services together. Why is joint procurement like that so rare in US higher education? I think there are two tightly connected reasons:

  • US higher education has valued institutional individuality far more highly than collective efficiency — that is, it assigns less importance to collective utility (that’s a microeconomics term for the value an actor expects) than to individual utility.
  • At the same time, it has failed to make the critical distinction between what Ryan Oakes, of Accenture‘s higher-education practice, recently called “differentiating” activities (those on which institutions reasonably compete) and generic “non-differentiating” activities (those where differences among peers are irrelevant to success). As a result, institutions have behaved competitively in all but a few contexts, even in those non-differentiating areas where collaboration is the right answer.

Although it’s a bit of a caricature, the situation somewhat resembles the scenario for the Rand Corporation‘s 1950s-era game-theory test, The Prisoner’s Dilemma. Here’s a version from Wikipedia:

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated the prisoners, visit each of them to offer the same deal. If one testifies for the prosecution against the other (defects) and the other remains silent (cooperates), the defector goes free and the silent accomplice receives the full one-year sentence. If both remain silent, both prisoners are sentenced to only one month in jail for a minor charge. If each betrays the other, each receives a three-month sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

The dilemma is this:

  • The optimal individual choice for each prisoner is to rat out the other — that is, to “defect” — since this guarantees him or her a sentence of no more than three months, with a shot at freedom if the other prisoner remains silent. Individuals seeking to maximize their own success (to make a “utility-maximizing rational choice”, in microeconomic terms) thus choose to defect. In decision-analytic terms, since prisoner A has no idea what prisoner B will do, A assigns a probability of .5 to each possible choice B might make. A multiplies those probabilities by the consequences to obtain the expected values of his or her two options: (3)(.5)+(0)(.5) = 1.5 months for defecting, and (12)(.5)+(1)(.5) = 6.5 months for cooperating. A chooses to defect. B does the same calculation, and also chooses to defect. Since both choose to defect, each gets a three-month sentence, and they serve a total of six months in jail.
  • The optimal choice for the two prisoners together, as measured by the total of their two sentences, is for both to remain silent, that is, to cooperate. This yields a sentence of one month for each prisoner, or two months in total. In contrast, defect/cooperate and cooperate/defect each yield twelve months (one year for one prisoner, freedom for the other), and defect/defect yields six months (three months for each). So the best joint choice is for A and B both to remain silent.

So each prisoner acting in his or her own self interest yields more individual and total prison time than each acting for their joint good — each would serve three months rather than one. But since A cannot know that B will cooperate and vice versa, each of them chooses self interest, and both end up worse off.
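
For the quantitatively inclined, here is that arithmetic as a minimal Python sketch (the sentences, in months, come straight from the Wikipedia version quoted above):

    # Sentences in months, keyed by (my choice, the other prisoner's choice).
    SENTENCE = {
        ("defect", "defect"): 3,        # both betray: three months each
        ("defect", "cooperate"): 0,     # I betray, the other stays silent: I go free
        ("cooperate", "defect"): 12,    # I stay silent, the other betrays: the full year
        ("cooperate", "cooperate"): 1,  # both stay silent: one month each
    }

    def expected_sentence(my_choice, p_other_defects=0.5):
        """Expected months in jail if the other prisoner defects with probability p."""
        return (SENTENCE[(my_choice, "defect")] * p_other_defects
                + SENTENCE[(my_choice, "cooperate")] * (1 - p_other_defects))

    for choice in ("defect", "cooperate"):
        print(choice, expected_sentence(choice))
    # defect 1.5, cooperate 6.5 -- so each "rational" prisoner defects,
    # and both serve 3 months when mutual silence would have cost 1 apiece.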

The situation isn’t quite the same for several colleges that might negotiate together for a good deal from a vendor, mostly because no one will get anything for free. But a problem like the prisoner’s dilemma arises when one or more members of the group conclude that they can get a better deal from the vendor by themselves than what they think the group would obtain. If those members try to cut side deals, the incentive for the vendor to deal with the other members shrinks, especially if the defecting members’ deals consume a substantial fraction of the vendor’s price flexibility. The vendor prefers doing a couple of side deals to the overall deal so long as the side deals require less total discount than the group deal would. Members have every incentive to cut side deals, vendors prefer a small number of side deals to a blanket deal, and so unless all the colleges behave altruistically a joint deal is unlikely.
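
To see why the vendor goes along, here is a toy calculation with made-up numbers (the prices and discounts are purely illustrative, not drawn from any actual deal):

    # Hypothetical numbers: a ten-college consortium and a per-college list price.
    list_price = 100_000
    group_size = 10
    group_discount = 0.20            # a blanket deal would give every member 20% off

    side_discounts = [0.30, 0.25]    # two defectors negotiate deeper individual cuts

    # Revenue the vendor would forgo under each arrangement.
    group_cost = group_size * list_price * group_discount      # $200,000
    side_cost = sum(d * list_price for d in side_discounts)    # $55,000

    # The vendor happily takes the side deals whenever they cost less than
    # the blanket deal -- which is exactly the defectors' opening.
    print(side_cost < group_cost)    # True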

And so the $64 question: What would break this cycle? The answer is simple: sharing information and committing to joint action. If the prisoners could communicate before deciding whether to defect or cooperate, their rational choice would be to cooperate. If colleges shared information about their plans and their deals, the likelihood of effective joint action would increase sharply. That would be good for the colleges and not so good for the vendor. From this perspective, it’s clear why non-disclosure clauses are so common in vendor contracts.

In the end, the only path to effective joint action is a priori collaboration — that is, agreeing to pool resources, including clout and information, and work together for the common good. So long as colleges and universities hold back from collaboration (for example, saying, as about 15% of respondents did in a recent EDUCAUSE survey, that their institutions would wait to see what others achieved before committing to collaboration), successful joint action will remain difficult.

GoTo, Gas Pedals, & Google: What Students Should Know, and Why That’s Not What We Teach Them

In the 1980s I began teaching a course in BASIC programming in the Harvard University Extension, part of an evening Certificate of Advanced Study program for working students trying to get ahead. Much to my surprise, students immediately filled the small assigned lecture hall to overflowing, and nearly overwhelmed my lone teaching assistant.

Within two years, the course had grown to 250+ students. They spread throughout the second-largest room in the Harvard Science Center (Lecture Hall C – the one with the orange seats, for those of you who have been there). I now had a dozen TAs, so I was in effect not only teaching the BASIC course, but also leading a seminar on the pedagogical challenge of teaching non-technical students how to write structured programs in a language that heretically allowed “GoTo” statements.

Computer Literacy?

There’s nothing very interesting or exciting about learning to program in BASIC. Although I flatter myself a good teacher, even my best efforts to render the material engaging – for example, assignments that variously involved having students act out various roles in Stuart Madnick’s deceptively simple Little Man Computer system, automating Shirley Ellis‘s song The Name Game, and modeling a defined-benefit pension system – in no way explained the course’s popularity.

So what was going on? I asked students why they were taking my course. Most often, they said something about “computer literacy”. That’s a useful (if linguistically confused) term, but in this case a misleading one.

If the computer becomes important, the analogy seems to run, then the ability to use a computer becomes important, much as the spread of printed material made reading and writing important. So far so good. For the typical 1980s employee, however, using computers in business and education centered on applications like word processors, spreadsheets, charts, and maybe statistical packages. Except for those within the computer industry, it rarely involved writing code in high-level languages.

BASIC programming thus had little direct relevance to the “computer literacy” students actually needed. The print era made reading and writing important for the average worker and citizen. But only printers needed adeptness with the technologies of paper, ink, composition (in the Linotype sense), and presses. That’s why the analogy fails: programming, by the 1980s, was about making computer applications, not using them. That’s the opposite of what students actually needed.

Yet clearly students viewed the ability to program in BASIC – even “Shirley Shirley bo-birley…” – as somehow relevant to the evolving challenges of their jobs. If BASIC programming wasn’t directly relevant to actual computer literacy, why did they believe this? Two explanations of its indirect importance suggest themselves:

  • Perhaps ability to program was an accessible indicator of more relevant yet harder-to-measure competence. Employers might have been using programming ability, however irrelevant directly, as a shortcut measure to evaluate and sort job applicants or promotion candidates. (This is essentially a recasting of Lester Thurow‘s “job queues” theory about the relationship between educational attainment and hiring, namely that educational attainment signals the ability to learn quickly rather than provides direct training.) Applicants or employees who believed this was happening would thus perceive programming ability as a way to make themselves appear attractive, even though the skill was actually irrelevant.
  • Perhaps students learned to program simply to gain confidence that they could cope with the computer age.

I propose a third explanation:

  • As technology evolves, generations that experience the evolution tend to believe it important for the next generation to understand what came before, and advise students accordingly.

That is, we who experience technological change believe that competence with current technology benefits from understanding prior technology – a technological variant of George Santayana’s aphorism “Those who cannot remember the past are condemned to repeat it” – and send myriad direct and indirect messages to our successors and students that without historical understanding one cannot be fully competent.

Shifting Gears

My father taught me to drive on the family’s 1955 Chevy station wagon, a six-cylinder car with a three-speed, non-synchromesh, stalk-mounted-shifter manual transmission and power nothing. After a few rough sessions learning to get the car moving without bucking and stalling, to turn and shift at the same time, and to double-clutch and downshift while going downhill, I became a pretty good driver.

But my father, who had learned to drive on a Model T Ford with a planetary transmission and separate throttle and spark-advance controls, remained skeptical of my ability. He was always convinced that since I didn’t understand that latter distinction, I really wasn’t operating the car as well as I might. (Today’s “accelerator”, if I understand it correctly, combines the two functions: it tells the engine to spin faster, which is what the spark-advance lever did, and then feeds it the necessary fuel mixture, which was the throttle’s function.)

Years later it came time for our son’s first driving lesson. We were in our automatic-transmission Toyota Camry, equipped with power steering and brakes, on a not-yet-opened Cape Cod subdivision’s newly paved streets. Apparently forgetting how irrelevant the throttle/spark distinction had been to my learning to drive, I delivered a lecture on what was going on in the automatic transmission – why it didn’t need a clutch, how it was deciding when to shift gears, and so forth. Our son listened patiently, and then rapidly learned to drive the Camry very well without any regard to what I’d explained. My lecture had absolutely no effect on his competence (at least not until several years later, I like to believe, when he taught himself to drive a friend’s four-on-the-floor VW).

Technological Instruction

Which brings me to the present, and the challenge of preparing today’s students for tomorrow’s technological workplaces. What should our advice to them be, either explicitly – in the form of direct instruction or requirements – or implicitly, in the form of contextual guidance such as induced so many students to take my BASIC course? In particular, how can we break away from the generational tendency to emphasize how we got here rather than where we’re going?

I don’t propose to answer that question fully here, but rather to sketch, through two examples, how a future-oriented perspective might differ from a generational one. The first example is cloud services, and the second example is online information.

Cloud Services

I started writing this essay on my DC office computer. I’m typing these words on an old laptop I keep in my DC apartment, and I’ll probably finish it on my traveling computer or perhaps on my Chicago home computer. A big problem ensues: How do I keep these various copies synchronized? My answer is a service called Dropbox, which copies documents I save to its central servers and then disseminates them automatically to all my other computers and even my phone. What I need is to have the same document up to date wherever I’m working. Dropbox achieves this by synchronizing multiple copies of the same documents across multiple computers and other devices.

Alternatively, I might have gotten what I need – having the same document up to date wherever I’m working – by drafting this post as a Google or Live document. Rather than synchronizing local copies among my various computers, I’d have been editing the same remote document from each of them.
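
For the curious, here is a minimal sketch of the two models. The server object and its methods are my own invention for illustration, not Dropbox’s or Google’s actual interfaces:

    import hashlib
    from pathlib import Path

    def digest(path: Path) -> str:
        """Fingerprint a local copy so we can tell whether it matches the server's."""
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def sync(path: Path, server):
        """Dropbox-style model: many local copies, reconciled against a central master."""
        if digest(path) != server.digest(path.name):
            if server.newer_than(path.name, path.stat().st_mtime):
                path.write_bytes(server.download(path.name))  # pull the newer copy down
            else:
                server.upload(path.name, path.read_bytes())   # push my copy up

    def edit_remote(server, name: str, change):
        """Google-Docs-style model: no local copies to reconcile at all;
        every edit goes straight to the single remote document."""
        server.apply_edit(name, change)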

My instincts are that this difference between synchronized and remote documents is important, something that I, as an educator, should be sure the next generation understands. When my son asks about how to work across different machines, my inclination is to explain the difference between the options, how one is giving way to the other, and so forth. Is that valid, or is this the same generational fallacy that led my father to explain throttles and spark advance or me to explain clutches and shifting?

Online Information

When I came to the history quote above, I couldn’t remember its precise wording or who wrote it. That’s what the Internet is for, right? Finding information?

I typed “those who ignore the past are doomed”, which was the partial phrase I remembered, into Google’s search box. Among the first page of hits, the first time I tried this, were links to answers.com, wikiquote.org, answers.google.com, wikipedia.org, and www.nku.edu. The first four of those pointed me to the correct quote, usually giving the specific source including edition and page. The last, from a departmental web page at Northern Kentucky University, blithely repeated the incorrect quote (but at least ascribed it to Santayana). One of the sources (answers.com) pointed to an earlier, similar quote from Edmund Burke. The Wikipedia entry reminded me that the quote is often incorrectly ascribed to Plato.

I then typed the same search into Bing’s search box. Many links on its first page of results were the same as Google’s — answers.com and wikiquote.org — but there were more links to political comments (most of them embodying incorrect variations on the quote), and one link to a conspiracy-theorist page linking the Santayana quote to George Orwell’s “He who controls the present, controls the past. He who controls the past, controls the future”.

It wasn’t hard for me to figure out which search results to heed and which to ignore. The ability to screen search results and then either to choose which to trust or to refine the search is central to success in today’s networked world. What’s the best way to inculcate that skill in those who will need it?

I’ve been working in IT since before the Digital Equipment Corporation‘s Altavista, in its original incarnation, became the first Web search engine. The methods different search services use to locate and rank information have always been especially interesting. The early Altavista ranked pages based on how many times search words appeared in them – a method so obviously manipulable (especially by sneaking keywords into non-displayed parts of Web pages) that it rapidly gave way to more robust approaches. The links one gets from Google or Bing today come partly from some very sophisticated ranking said to be based partly on user behavior (such as whether a search seems to have succeeded) and partly on links among sites (this was Google’s original innovation, called PageRank) – but also, quite openly and separately, from advertisers paying to have their sites displayed when users search for particular terms.
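
For concreteness, here is a toy sketch of the two ideas – early-Altavista-style keyword counting, and a bare-bones version of the link-based iteration behind PageRank. (This is a deliberately simplified caricature; real search ranking involves far more.)

    def keyword_score(page_text: str, query: str) -> int:
        """Early-Altavista-style ranking: count query-word occurrences (easily gamed)."""
        words = page_text.lower().split()
        return sum(words.count(term) for term in query.lower().split())

    def pagerank(links: dict, damping: float = 0.85, iterations: int = 50) -> dict:
        """Bare-bones PageRank: a page matters if pages that matter link to it."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new = {p: (1 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                for target in outlinks:
                    if target in new:
                        new[target] += damping * rank[page] / len(outlinks)
            rank = new
        return rank

    # A tiny hypothetical web: two pages link to "a", so "a" ranks highest.
    print(pagerank({"a": ["b"], "b": ["a"], "c": ["a"]}))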

Here again the generational issue arises. Obviously we want to teach future generations how to search effectively, and how to evaluate the quality and reliability of the information their searches yield. But do we do this by explaining the evolution of search and ranking algorithms – the generational approach sketched above – or by teaching more generally, as bibliographic instructors in libraries have long done, how to cross-reference, assess, and evaluate information whatever its form?

Understanding throttles and spark advance did not help me become a better driver, understanding BASIC probably didn’t help prepare my Harvard students for their future workplaces, and explaining diverse cloud mechanisms and search algorithms isn’t the best way for us to maximize our students’ technological competence. Much as I love explaining things, I think the essence of successful technological teaching is to focus on the future, on the application and consequences of technology rather than its origins.

That doesn’t mean that we should dismiss the importance of history, but rather that history does not suffice as a basis for technological instruction. It’s easier to explain the past than to anticipate the future, but the latter, however risky and uncertain and detached from our personal histories, is our job.

Network Neutrality: Who’s Involved? What’s the Issue? Why Do We Give a Shortstop?

Who’s on First, Abbott and Costello’s classic routine, first reached the general public as part of the Kate Smith Radio Hour in 1938. It then appeared on almost every radio network at some time or another before reaching TV in the 1950s. (The routine’s authorship, as I’ve noted elsewhere, is more controversial than its broadcast history.) The routine can easily be found many places on the Internet – as a script, as audio recordings, or as videos. Some of its widespread availability is from widely-used commercial services (such as YouTube), some is from organized groups of fans, and some is from individuals. The sources are distributed widely across the Internet (in the IP-address sense).

I can easily find and read, listen to, or watch Who’s on First pretty much regardless of my own network location. It’s there through the Internet2 connection in my office, through my AT&T mobile phone, through my Sprint mobile hotspot, through the Comcast connections where I live, and through my local coffeeshops’ wireless in DC and Chicago.

This, most of us believe, is how the Internet should work. Users and content providers pay for Internet connections, at rates ranging from the price of a cup of coffee to thousands of dollars, and the speed of one’s connection thus may vary by price and location. One may need to pay providers for access, but the network itself transmits similarly no matter where stuff comes from, where it’s going, or what its substantive content is. This, in a nutshell, is what “network neutrality” means.

Yet network neutrality remains controversial. That’s mostly for good, traditional political reasons. Attaining network neutrality involves difficult tradeoffs among the economics of network provision, the choices available to consumers, and the public interest.

Tradeoffs become important when they affect different actors differently. That’s certainly the case for network neutrality:

  • Network operators (large multifunction ones like AT&T and Comcast, large focused ones like Verizon and Sprint, small local ones like MetroPCS, and business-oriented ones like Level3) want the flexibility to invest and charge differently depending on who wants to transmit what to whom, since they believe this is the only way to properly invest for the future.
  • Some Internet content providers (which in some cases, like Comcast, are also network operators) want to know that what they pay for connectivity will depend only on the volume and technical features of their material, and not vary with its content, whereas others want the ability to buy better or higher-priority transmission for their content than competitors get — or perhaps to have those competitors blocked.
  • Internet users want access to the same material on the same terms regardless of who they are or where they are on the network.

Political perspectives on network neutrality thus vary depending on who is proposing what conditions for whose network.

But network neutrality is also controversial because it’s misunderstood. Many of those involved in the debate either don’t – or won’t – understand what it means for a public network to be neutral, or indeed what the difference is between a public and a private network. That’s as true in higher education as it is anywhere else. Before taking a position on network neutrality or whose job it is to deal with it, therefore, it’s important to define what we’re talking about. Let me try to do that.

All networks discriminate. Different kinds of network traffic can entail different technical requirements, and a network may treat different technical requirements differently. E-mail, for example, can easily be transmitted in bursts – it really doesn’t matter if there’s a fifty-millisecond delay between words – whereas video typically becomes jittery and unsatisfactory if the network stream isn’t steady. A network that can handle email may not be able to handle video. One-way transmission (for example, a video broadcast or downloading a photo) can require very different handling than a two-way transmission (such as a videoconference). Perhaps even more basic, networks properly discriminate between traffic that respects network protocols – the established rules of the road, if you will – and traffic that attempts to bypass rule-based network management.

Network neutrality does not preclude discrimination. Rather, as I wrote above, a network behaves neutrally if it avoids discriminating on the basis of (a) where transmission originates, (b) where transmission is destined, and (c) the content of the transmission. The first two elements of network neutrality are relatively straightforward, but the third is much more challenging. (Some people also confuse how fast their end-user connection is with how quickly material moves across the network – that is, someone paying for a 1-megabit connection considers the Internet non-neutral if they don’t get the same download speeds as someone paying for a 26-megabit connection – but that’s a separate issue largely unrelated to neutrality.) In particular, it can be difficult to distinguish between neutral discrimination based on technical requirements and non-neutral discrimination based on a transmission’s substance. In some cases the two are inextricably linked.
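
One way to make the definition concrete is to ask which fields of a transmission a network’s policy is allowed to consult. Here is a minimal sketch, in my own toy formulation (no actual operator’s logic looks like this):

    from dataclasses import dataclass

    @dataclass
    class Transmission:
        source: str         # where it originates
        destination: str    # where it's headed
        content: str        # what it says
        traffic_class: str  # technical profile: "bulk" (email), "streaming" (video), ...

    def neutral_policy(t: Transmission) -> str:
        """Neutral: may discriminate on technical requirements, and nothing else."""
        return "steady-stream queue" if t.traffic_class == "streaming" else "best-effort queue"

    def non_neutral_policy(t: Transmission) -> str:
        """Non-neutral: the decision consults who is sending, not how the bits behave."""
        return "priority queue" if t.source == "paying-provider.example" else "blocked"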

Consider several ways network operators might discriminate with regard to Who’s on First.

  • Alpha Networks might decide that its network simply can’t handle video streaming, and therefore might configure its systems not to transmit video streams. If a user tries to watch a YouTube version of the routine, it won’t work if the transmission involves Alpha Networks. The user will still be able to read the script or listen to an audio recording of the routine (for example, any of those listed in the Media|Audio Clips section of http://www.abbottandcostello.net/). Although refusing to carry video is clearly discrimination, it’s not discrimination based on source, destination, or content. Alpha Networks therefore does not violate network neutrality.
  • Beta Networks might be willing to transmit video streams, but only from providers that pay it to do so. Say, purely hypothetically, that the Hulu service – jointly owned by NBC and Fox – were to pay Beta Networks to carry its video streams, which include an ad-supported version of Who’s on First. Say further that Google, whose YouTube streams include many Who’s on First examples, were to decline to pay. If Beta Networks transmitted Hulu’s versions but not Google’s, it would be discriminating on the basis of source – and probably acting non-neutrally.

What if Hulu and Google use slightly different video formats? Beta might claim that carrying Hulu’s traffic but not Google’s was merely technical discrimination, and therefore neutral. Google would probably disagree. Who resolves such controversies – market behavior, the courts, industry associations, the FCC – is one of the thorniest points in the national debate about network neutrality. Onward…

  • Gamma Networks might decide that Who’s on First ridicules and thus disparages St. Louis (many performances of the routine refer to “the St Louis team”, although others refer to the Yankees). To avoid offending customers, Gamma might refuse to transmit Who’s on First, in any form, to any user in Missouri. That would be discrimination on the basis of destination. Gamma would violate the neutrality principle.
  • Delta Networks, following Gamma’s lead, might decide that Who’s on First disparages not just St. Louis, but professional baseball in general. Since baseball is the national pastime, and perhaps worried about lawsuits, Delta Networks might decide that Who’s on First should not be transmitted at all, and therefore it might refuse to carry the routine in any form. That would be discrimination on the basis of content. Delta would be violating the neutrality principle.
  • Epsilon Networks, a competitor to Alpha, might realize that refusing to carry video disserves customers. But Epsilon faces the same financial challenges as Alpha. In particular, it can’t raise its general prices to cover the expense of transmitting video since it would then lose most of its customers (the ones who don’t care about video) to Alpha’s lesser but less expensive service. Rather than block video, Epsilon might decide to install equipment that will enable video as a specially provided service for customers who want it, and to charge those customers – but not its non-video customers – extra for the added capability. Whether an operator violates network neutrality by charging more for special network treatment of certain content – the usual term for this is “managed services” – is another one of the thorniest issues in the national debate.

As I hope these examples make clear, there are various kinds of network discrimination, and whether they violate network neutrality is sometimes straightforward and sometimes not.  Things become thornier still if networks are owned by content providers or vice versa – or, as is more typical, if there are corporate kinships between the two. Hulu, for example, is partly owned by NBC Universal, which is becoming part of Comcast. Can Comcast impose conditions on “outside” customers, such as Google’s YouTube, that it does not impose on its own corporate cousin?

Why do we give a shortstop (whose name, lest you didn’t read to the end of the Who’s on First script, is “darn”)? That is, why is network neutrality important to higher education? There are two principal reasons.

First, as mobility and blended learning (the combination of online and classroom education) become commonplace in higher education, it becomes very important that students be able to “attend” their college or university from venues beyond the traditional campus. To this end, it is very important that colleges and universities be able to provide education to their students and interconnect researchers over the Internet. This should be constrained only by the capacity of the institution’s connection to the Internet, the technical characteristics of online educational materials and environments, and the capacity of students’ connections to the Internet.

Without network neutrality, achieving transparent educational transmission from campus to widely-distributed students could become very difficult. The quality of student experience could come to depend on the politics of the network path from campus to student. To address this, each college and university would need to negotiate transmission of its materials with every network operator along the path from campus to student. If some of those network operators negotiate exclusive agreements for certain services with commercial providers – or perhaps with other colleges or universities – it could become impossible to provide online education effectively.

Second, many colleges and universities operate extensive networks of their own, or together operate specialized inter-campus networks for education, research, administrative, and campus purposes. Network traffic inconsistent with or detrimental to these purposes is managed differently than traffic that serves them. It is important that colleges and universities retain the ability to manage their networks in support of their core purposes.

Networks that are operated by and for the use of particular organizations, like most college and university networks, are private networks. Private and public networks serve different purposes, and thus are managed based on different principles. The distinction is important because the national network-neutrality debate – including the recent FCC action, and its evolving judicial, legislative, and regulatory consequences – is about public networks.

Private networks serve private purposes, and therefore need not behave neutrally. They are managed to advance private goals. Public networks, on the other hand, serve the public interest, and so – network-neutrality advocates argue – should be managed in accordance with public policy and goals. Although this seems a clear distinction, it can become murky in practice.

For example, many colleges and universities provide some form of guest access to their campus wireless networks, which anyone physically on campus may use. Are guest networks like this public or private? What if they are simply restricted versions of the campus’s regular network? Fortunately for higher education, there is useful precedent on this point. The Communications Assistance for Law Enforcement Act (CALEA), which took effect in 1995, established principles under which most college and university campus networks are treated as private networks – even if they provide a limited set of services to campus visitors (the so-called “coffee shop” criterion).

Higher education needs neutrality on public networks because those networks are increasingly central to education and research. At the same time, higher education needs to manage campus networks and private networks that interconnect them in support of education and research, and for that reason it is important that there be appropriate policy differentiation between public and private networks.

Regardless, colleges and universities need to pay for their Internet connectivity, to negotiate in good faith with their Internet providers, and to collaborate effectively on the provision and management of campus and inter-campus networks. So long as colleges and universities act effectively and responsibly as network customers, they need assurance that their traffic will flow across the Internet without regard to its source, destination, or content.

And so we come to the central question: Assuming that higher education supports network neutrality for public networks, do we care how its principles – that public networks should be neutral, and that private ones should be manageable for private purposes – are promulgated, interpreted, and enforced? Since the principles are important to us, as I outlined above, we care that they be implemented effectively, robustly, and efficiently. Since the public/private distinction seems to be relatively uncontroversial and well understood, the core issue is whether and how to address network neutrality for public networks.

There appear to be four different ideas about how to implement network neutrality.

  1. A government agency with the appropriate scope, expertise, and authority could spell out the circumstances that would constitute network neutrality, and prescribe mechanisms for correcting circumstances that fell short of those. Within the US, this would need to be a federal agency, and the only one arguably up to the task is the Federal Communications Commission. The FCC has acted in this way, but there remain questions whether it has the appropriate authority to proceed as it has proposed.
  2. The Congress could enact laws detailing how public networks must operate to ensure network neutrality. In general, it has proven more effective for the Congress to specify a broad approach to a public-policy problem, and then to create and/or empower the appropriate government agency to figure out what guidelines, regulations, and redress mechanisms are best. Putting detail into legislation tends to enable all kinds of special negotiations and provisions, and the result is then quite hard to change.
  3. The networking industry could create an internal body to promote and enforce network neutrality, with appropriate powers to take action when its members fail to live up to neutrality principles. Voluntary self-regulatory entities like this have been successful in some contexts and not in others. Thus far, however, the networking industry is internally divided as to the wisdom of network neutrality, and without agreement on the principle it is hard to see how there could be agreement on self-regulation.
  4. Network neutrality could simply be left to the market. That is, if network neutrality is important to customers, they will buy services from neutral providers and avoid services from non-neutral providers. The problem here is that network neutrality must extend across diverse networks, and individual consumers – even if they are large organizations such as many colleges and universities – interact only with their own “last mile” provider.

Those of us in higher education who have been involved in the network-neutrality debates have come to believe that among these four approaches the first is most likely to yield success and most likely to evolve appropriately as networking and its applications evolve. This is especially true for wireless (that is, cellular) networking, where there remain legitimate questions about what level of service should be subject to neutrality principles, and what kinds of service might legitimately be considered managed, extra-cost services.

In theory, the national debate about network neutrality will unfold through four parallel processes. Two of these are already underway: the FCC has issued an order “to Preserve Internet Freedom and Openness”, and at least two network operators have filed lawsuits challenging the FCC’s authority to do that. So we already have agency and court involvement, and we can expect possible congressional actions and industry initiatives to round out the set.

One thing’s sure: This is going to become more complicated and confusing…

Lou: I get behind the plate to do some fancy catching. Tomorrow’s pitching on my team and a heavy hitter gets up. Now the heavy hitter bunts the ball. When he bunts the ball, me, being a good catcher, I’m gonna throw the guy out at first base. So I pick up the ball and throw it to who?

Bud: Now that’s the first thing you’ve said right.

Lou: I don’t even know what I’m talking about!

Change Rewards, Change Leadership

A few nights ago, at a meeting of IT people from a subset of research universities, dinner conversation turned to why IT people work in higher education. If you do IT in higher education, everyone pretty much agreed, it’s not to get rich. Rather, you’re driven by some combination of four reasons.

  1. IT jobs in higher education have tended to entail a complicated, challenging, and rewarding mix of work. IT organizations in colleges and universities tend to be relatively small, and tend to involve teamwork across domains that might otherwise be separate. There’s not the neat division between, say, technical and support staff that one might find elsewhere, and so there has been ample opportunity to grow along various dimensions.
  2. IT jobs in higher education have generally involved a certain amount of flexibility. One spends most of one’s time on one’s job, but it’s been commonplace for a certain fraction of time – perhaps as much as 20% — to be essentially the staffer’s to allocate. Much of that time has gone to experimenting with new technologies, or ways to use old technologies, or even to thinking about things not strictly technological. In the aggregate, much of that “uncommitted” time has gone to innovation, some successful and some not. But the opportunity to spend time on activities that are not strictly part of job descriptions, with part of that involving experimentation and innovation, has enabled IT organizations in higher education to be sources of administrative, educational, research, and technological progress.
  3. IT in higher education has usually provided good job security. Absent failure to produce or malfeasance, IT staff in education could reasonably assume that they would have jobs so long as they continued to perform well.
  4. This one’s somewhat different from the other three reasons. IT jobs in higher education have been an interesting and useful way station for recent graduates with abundant energy and skill but little sense of career or personal-development paths. For many IT staff, the job is not an end in itself, but rather a reasonable way to work for a few years while figuring out what to do next – be that graduate school, marriage and family, relocation, a more remunerative job, or whatever. That is, part of what appeals is working where one recently got – or is soon getting – a degree.

For reasons good and bad, the reasons to work in higher-education IT have eroded, without commensurate offsets such as better pay. This has had little effect on those who work in higher-education IT for reason #4. So long as economic downturns have affected the corporate and .com worlds more than colleges and universities, it also has had little effect on those who work for reasons #1-3. Since the economy has been on a bit of a roller coaster for some years, we’ve thus seen little erosion of IT staffing in higher education.

But as financial strictures settle firmly into higher education, this is changing. We have seen greater compartmentalization of tasks, tighter accounting for time and effort, and layoffs unrelated to performance. These directly undercut reasons #1 through #3 for working in higher-education IT, and they’re beginning to have effects on staffing.

What many perceive is a key shift in the higher-education IT workforce: fewer young staff stick around to rise through the ranks, and loss rates do not reverse with age as they once did. The staff-by-age distribution is becoming bimodal, and whereas the left-hand mode is staying put or moving left – that is, staff who leave are leaving sooner – the right-hand mode, starved for replenishment from the middle, is moving right. As a result, higher-education IT organizations are increasingly starved for middle management, and as a second-order consequence they are decreasingly able to fill leadership positions from within. We therefore see more middle-management and leadership hires from the corporate sector.

Those new hires, unaccustomed to reasons #1-4, are likely to magnify rather than counteract the changes that underlay their hiring. The result, if we can avoid a vicious circle, is that non-pecuniary reasons for working in higher-education IT will give way to pecuniary ones. IT staff in colleges and universities will have more focused, less flexible, and less secure jobs, but they will be paid more to do them. If productivity under the new regime grows sufficiently to offset higher IT payrolls (which may mean that staffing levels decline), then the evolution can succeed. If it doesn’t, then increased spending will fail to yield commensurate progress.

Let’s assume, for the moment, that the evolution is both desirable and successful. What issues might we need to address? Here are five for starters:

  1. Whether we like it or not, colleges and universities operate with cultures that are very different from corporate cultures. If staff are hired from the latter, then we need to find effective ways to quickly give them an understanding of the former. This doesn’t mean that outside hires must go native, accepting their new culture as gospel, but rather that they must understand the status quo if they are to change it.
  2. If pay does become a dominant reason for people to work in higher-education IT, then colleges and universities must adopt modern approaches to compensation: bonuses for effective work, putting some compensation formally at risk, direct connections between performance reviews and compensation, benefits suited to staff needs, and, most important, compensation levels that truly compare favorably with the outside market.
  3. Even if we successfully integrate and compensate staff from outside higher education, inevitably staff turnover will be higher than it has been in the past: that’s another attribute of corporate IT cultures. This means that college and university IT organizations can no longer rely on long-term employees as the repositories of accumulated experience. Rather, they must adopt formal mechanisms for reaching, making, and recording decisions, for documenting implementation and change, and generally for ensuring that wisdom survives turnover.
  4. Idiosyncrasy, long the hallmark of higher-education IT and in many cases the guarantor of continued employment, will give way to standardization. The default for decisions whether to outsource will no longer be “no”. The burden of proof will shift to those opposing outsourcing, and there will be increased scrutiny of “we’ve always done it this way” arguments.
  5. Partly as a result of #4, possibilities for inter-institutional collaboration and joint procurement should expand. In some cases this will lead logically to joint-development efforts, especially where higher education has unique needs (for example, student systems, learning-management systems, and many research applications). In other cases it will lead logically to demand aggregation, especially where higher-education needs are consistent, they resemble those outside the academy, and they yield no competitive advantage (for example, email systems or cloud-based storage).

Many of those currently preparing to become leaders within higher-education IT are ill prepared to address issues like these. Many of those who come from outside to take leadership positions in higher-education IT do not understand why issues like these are hard to address in higher education. The solution, as with many of the changes we face, is serious, deep-thinking professional development: places where those rising or jumping into college or university IT leadership can learn what the future is bringing and how to address it. These can be didactic, like EDUCAUSE’s management and leadership institutes, or collaborative, like the Common Solutions Group, the Council of Liberal Arts Colleges, the League for Innovation, and other entities where peers gather to share experiences and best practices.

In the past we’ve built leaders informally, by drawing them from those who have been with us for a long time. As that pool dries up, we need to think differently.

The Era of Control: It’s Over

Remember Attack Plan R? That’s the one that enabled Brigadier General Ripper to bypass General Turgidson and the rest of the Air Force chain of command, sending his bombers on an unauthorized mission to attack the Soviet Union. As a result of Plan R and two missed communications – the Soviets’ failure to announce the Doomsday Device and an encrypted radio’s failure to recall Major Kong’s B-52 – General Ripper ended the world and protected our precious bodily fluids, Colonel Mandrake’s best efforts notwithstanding.

Fortunately, we’re talking about Sterling Hayden, George C. Scott, Slim Pickens, and Peter Sellers in Stanley Kubrick’s Dr. Strangelove, or How I Learned to Stop Worrying and Love the Bomb, and not an actual event. But it’s a classic illustration of why control of key technologies is important, and it’s the introduction to my theme for today: we’re losing control of key technologies, and that loss has implications for how campus IT leaders do their jobs. We can afford to lose control; we can’t afford to lose influence.

Here’s how IT was on most campuses until about ten years ago:

  • There were a bunch of central applications: a financial system, a student-registration system, some web servers, maybe an instructional-management system, and then some more basic central facilities such as email and shared file storage.
  • Those applications and services were all managed by college or university employees – usually in the central IT shop, but sometimes in academic or administrative units – and they ran on mainframes or servers owned and operated likewise.
  • Students, faculty, and staff used the central applications from desktop and laptop computers. Amazing as it seems today, students often obtained their computers from the campus computer reseller or bookstore, or at least followed the institution’s advice if they bought them elsewhere. Faculty and staff computers were typically purchased by the college or university and assigned to users. In all three cases, a great deal of the software on users’ computers was procured, configured, and/or installed by college or university staff.
  • Personal computers and workstations connected to central applications and services over the campus network. The campus network, mostly wired but increasingly wireless, was provided and managed by the central IT organization, or in some cases by academic units.
  • Campus telephony consisted primarily of office, dormitory, and “public” telephone sets connected to a campus telephone exchange, which was either operated by campus staff or operated by an outside vendor under contract to the campus.

Notice the dominant feature of that list: all the key elements of campus information technology were provided, controlled, or at least strongly guided by the college or university, either through its central IT organization or through academic or administrative units.

There were exceptions to campus control over information technology, to be sure. The World Wide Web was already a major source of academic information. Although some academic material on the web came from campus servers, most of it didn’t. There were also many services available over the web, but most campus use of these was for personal rather than campus purposes (banking, for example, and travel planning). Most campus libraries relied heavily on outside services, some accessible through the web and some through other client/server mechanisms. And many researchers, especially in physics and other computationally intensive disciplines, routinely used shared computational resources located in research centers or on other campuses. Regardless, a decade ago central or departmental campus IT organizations controlled most campus IT.

Now fast forward to the present, when things are quite different. The first difference is complexity. For example, I described my own IT environment in a post a few weeks ago:

I started typing this using Windows Live Writer (Microsoft) on the Windows 7 (Microsoft, but a different part) computer (Dell) that is connected to the network (Comcast, Motorola, Cisco) in my home office, and it’ll be stored on my network hard drive (Seagate) and eventually be posted using the blog service (WordPress) on the hosting service (HostMonster) where my website and blog reside. I’ll keep track of blog readers using an analytic service (Google), and when I’m traveling I might correct typos in the post using another blog editor (BlogPress) on my iPhone (Apple) communicating over cellular (AT&T) or WiFi (could be anyone) circuits.

Much of today’s IT is like this: it involves user-owned technologies like computers and phones combined in complicated ways with cloud-based services from outside providers. We’re always partly in the cloud, and partly on the ground, partly in control and partly at the mercy of providers.

Mélanges like this are pretty much the norm these days. And not just when people are working from home as I often do: the same is increasingly true at the office, for faculty and staff, and in class and dorms, for students. It’s true even of the facilities and services we might still think of as institutional: administrative systems and core services, for example.

The mélanges are complex, as I already pointed out, but their complexity goes beyond technical integration. That’s the second difference between the past and the present: not only does IT involve a lot of interrelated yet distinct pieces, but the pieces typically come from outside providers – the cloud, in the current usage. Those outside providers operate outside campus control.

Group the items I listed in my ten-years-ago list above into four categories: servers and data centers, central services and applications, personal computers and workstations, and the networks that interconnect them. Now consider how IT is on today’s campus:

  • Instead of buying new servers and housing them in campus buildings, campuses increasingly create servers within virtualization environments, house those environments in off-campus facilities perhaps provided by commercial hosting firms, and use the hosting firm’s infrastructure rather than their own.
  • A rapidly growing fraction of central services is outsourced, which means not only that servers are off campus and proprietary, but that the applications and services are being administered by outside firms.
  • Although most campuses still operate wired and wireless networks, users increasingly reach “campus” services through smartphones, tablets, web-enabled televisions, and portable computers whose default connectivity is provided by telephone companies or by commercial network providers.
  • Those smartphones, tablets, and televisions, and even many of the portable computers, are chosen, procured, configured, and maintained by individual users, not by the institution.

This migration from institutional to external or personal control is what we mean by “cloud services”. That is, the key change is not the location of servers, the architecture of applications, the management of networks, or the procurement of personal computers. Rather, it is our willingness to give up authority over IT resources, to trade control for economy of scale and cost containment. It’s Attack Plan R.
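
To make the first bullet in the list above concrete: provisioning a server becomes an API call against someone else’s infrastructure rather than a purchase order and a rack in the machine room. Here is a minimal sketch; HostingClient and its methods are hypothetical stand-ins, not any real vendor’s API.

```python
# Hypothetical sketch: what "buying a server" becomes once the
# infrastructure belongs to an outside hosting firm. HostingClient
# stands in for a commercial provider's control plane; nothing here
# names a real vendor's interface.
from dataclasses import dataclass

@dataclass
class VirtualServer:
    name: str
    cpus: int
    memory_gb: int
    region: str  # the campus no longer knows, or controls, the building

class HostingClient:
    """Stand-in for a commercial hosting provider's API."""

    def create_server(self, name: str, cpus: int, memory_gb: int,
                      region: str = "provider-chooses") -> VirtualServer:
        # In practice this would be an authenticated HTTP call to the
        # vendor; the campus gets back an opaque handle, not hardware.
        return VirtualServer(name, cpus, memory_gb, region)

# The "procurement" step: no loading dock, no machine room, and no
# control beyond what the contract and the API expose.
client = HostingClient()
lms = client.create_server("learning-mgmt-prod", cpus=8, memory_gb=32)
print(f"{lms.name} is running somewhere in {lms.region}")
```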

The tradeoff has at least four important consequences. Three of these have received widespread attention, and I won’t delve into them here: ceding control to the cloud

  • can introduce very real security, privacy, and legal risks,
  • can undercut long-established mechanisms designed to produce effective user support at minimal cost, and
  • can restructure IT costs and funding mechanisms.

I want to focus here on the fourth consequence. Ceding control to the cloud necessarily entails a fundamental shift in the role of central and departmental IT organizations. This shift requires CIOs and IT staff to change their ways if they are to continue being effective – being Rippers, Kongs, or Turgidsons won’t work any more.

Most important, cloud-driven loss of campus control over the IT environment means that organizational and management models based on ownership, faith, authority, and hierarchy – however benevolent, inclusive, and open – will give way to models based on persuasion, negotiation, contracting, and assessment. It will become relatively less important for CIOs to be skilled at managing large organizations, and more important for them to know how to define, specify, and measure costs and results and to negotiate intramural agreements and extramural contracts consistently. Cost-effective use of cloud services also requires standardization, since nothing drives costs up and vendors away quite so quickly as idiosyncrasy. Migration to the cloud thus requires that CIOs understand emerging standards, especially for database schemas, security models, virtualization, system interfaces, and on and on. It requires that CIOs understand that strength lies in numbers – especially in campuses banding together to procure services rather than in departures from the norm, however innovative.

Almost as important, much of what campuses now achieve through regulation they will need to achieve through persuasion – policy will give way to pedagogy as the dominant mechanism for guiding users and units. I serve on the operations advisory committee for a federal agency. To ensure data and program integrity, the agency’s IT organization wanted to revise policies to regulate staff use of personal computers and small mobile devices to do their work from home or while traveling. It rapidly became clear, as the advisory committee reacted to this, that although it was perfectly possible to write policies governing the use of personal and mobile devices, most of those policies would be ignored unless the agency mounted a major educational campaign. Then it became clear that a major educational campaign would itself minimize the need for policy changes. This is the kind of transition we can imagine in higher education: we need to help users to do the right thing, not just tell them.

On occasion I’ve described my core management approaches as “bribery” and “conspiracy”. This meant that as CIO my job was to make what was best for the university also be what was most appealing to individuals (that’s “bribery”), and that figuring out what was best for the university required discussion, collaboration, and agreement among a broad array of IT and non-IT campus leaders (“conspiracy”). As control gives way to the cloud for campus IT, these two approaches become equally relevant to vendor relations and inter-institutional joint action. We who provide information technology to higher education need to work together to ensure that as we lose control over IT resources we don’t also lose influence.

Reflections on CIOship, Part III

This is the third and final installment of my ruminations on this topic. Some years ago, about halfway through my UChicago CIOship, I wrote this:

…it is bad news when a dean at my university not only knows who I am and what I do, but doesn’t like it. Suppose the dean wants to buy nonstandard equipment through a nonstandard channel. On the university’s behalf, I object to those choices; I also believe that the entire purchase is unnecessary because the equipment would allow the dean’s school to duplicate existing services that the university provides centrally. The dean responds by arguing that the interests of individual schools should outweigh collective university interests, and that having a chief information officer — that’s me — serves no useful purpose. To the dean, my job seems to consist entirely of interfering in school affairs.

I quoted this passage from “A CIO’s Question: Will You Still Need Me When I’m 64?”, a 2002 opinion piece in the Chronicle, in an EDUCAUSE presentation because it had gotten me in trouble. Although it was based on a particular dean at one institution, a different dean at a different institution thought it was about him, and wasn’t happy. The lesson I drew from that — that specificity is sometimes better than generality, and that readers identify with abstracted characters — remains important.

But that’s not my theme today. Rather, I want to revisit the arguments I made in “A CIO’s Question…”, and see how their validity has evolved. My core argument was that it remains important for colleges and universities to have a Chief Information Officer or CIO. By “CIO” I meant not necessarily someone with that exact title, but rather someone providing central, comprehensive leadership for the institution’s information-technology activities and policies.

Technologies have a lifecycle. Initial implementation often requires central initiative and guidance. In due course, however, many technologies mature sufficiently to no longer require central oversight. For example, “most colleges and universities now have pervasive networks,” I wrote, “accessible to everyone everywhere. Users interact directly with administrative and academic systems. E-mail, instant messaging, and cellphones are everyday tools. Information technology is a utility like electric power, available consistently and pervasively across most of higher education” — and therefore requiring crisp operational management rather than high-level leadership. I cited other examples of areas no longer requiring central attention: the promotion of instructional technologies, and the support of local computer users (more on that in my recent post “Help! I need somebody.”)

Yet there remain at least four good, interlinked reasons for colleges and universities to have CIOs, I wrote: properly integrating central systems, securing economies of scale in operations and procurement, promulgating and accepting standards to enable those first two, and advocating for strategic applications of IT across the institution’s academic and administrative domains.

These seem valid justifications for the CIO role today. Yet one hears challenges to traditional CIOship: distributing responsibility for IT across diverse separate entities, downgrading the role and importance of the CIO, and managing what remains of central IT as merely one utility among many.

It’s important to parse these trends carefully. What appears to be coherent movement in the wrong direction may actually be discrete movement in several right directions. Moreover, although it remains important that there be central leadership for institutional IT, it may no longer be important — or even feasible — that there be central control. (More on the demise of control in an upcoming post.)

As technologies mature, they enable decentralization. As the strategic importance of IT increases, it becomes dangerous for an institution to entrust it to a single individual, especially if that individual speaks the language of technology rather than the language of higher education, or if that individual is organizationally and geographically isolated. And a large fraction of IT is indeed a critical utility, one whose failure can jeopardize the institution, and so, much of IT must be managed as such.

What appears to be the downgrading of IT may instead be an evolving recognition — however imperfect — of its increasing importance. Then again, it may be the downgrading of IT, or the political dilution of a CIO’s perceived power. We need to understand which is which: to think about the evolving CIO role thoughtfully rather than simplistically, and to focus less on its scope of authority, its precise location in organization charts, or its title and reporting line.

The challenge for a CIO is to ensure that his or her institution uses IT to its maximum long-term benefit. To achieve that, it’s important to hold onto what’s important in one’s particular institutional and technological context, while letting go of what isn’t.

This post, in somewhat modified and expanded form, appears as “Shrinking CIO?” in EDUCAUSE Review, vol. 46, no. 1 (January/February 2011)

Help! I need somebody. But whom?

My mother called a couple of weeks ago to say she was unable to use the Internet. She doesn’t make calls like this lightly – she’s a regular email user, does extensive research online about things ranging from food to politics to textiles, and learned to use instant messaging so she could stay in touch with her grandchildren. And she understands that her HP computer is connected to RoadRunner and thereby to the Internet. So this wasn’t just her being new to technology.

I figured that she must have somehow lost her network connection, and began walking her through some experiments (“Are the lights blinking on the modem?”, and so on). But I was over-analyzing the problem. To my mother, as to many users, the Internet isn’t a network. Rather, it’s the collection of sites and services she can reach through her web browser. The problem, it turned out, was that my mother had accidentally moved the Firefox shortcut off her desktop. No icon, no browser. No browser, no Internet. No Internet, get help.

My mother’s missing icon may seem a triviality, but it certainly wasn’t trivial to her, and I think the problem and how she handled it are instructive. Lots of services we take for granted these days don’t come from a single provider or involve a single application or technology. In fact, sometimes it’s not clear what’s a service, what’s an application, and what’s a technology. Mostly that works out okay. When it doesn’t, though, it’s very hard to figure out where the chain broke down. Even when we figure that out, it’s hard to get the provider that broke the chain to fess up and make things right.

Perhaps as a consequence, we IT folks get lots of calls like my mother’s from friends and relatives: “My printer stopped working. Can you tell me what to do?”, or “How come Aunt Jane didn’t receive my email?”, or “I just clicked on something in a website, and now my screen is full of pornography ads”, or “Is it safe to buy something from eBay?”, or “Can I get my email on my phone?”. Why, we wonder, are they calling us, and not whom they’re supposed to call?

That’s the topic of today’s rumination: What’s the right way to think about help desks?

I started typing this using Windows Live Writer (Microsoft) on the Windows 7 (Microsoft, but a different part) computer (Dell) that is connected to the network (Comcast, Motorola, Cisco) in my home office, and it’ll be stored on my network hard drive (Seagate) and eventually be posted using the blog service (WordPress) on the hosting service (HostMonster) where my website and blog reside. I’ll keep track of blog readers using an analytic service (Google), and when I’m traveling I might correct typos in the post using another blog editor (BlogPress) on my iPhone (Apple) communicating over cellular (AT&T) or WiFi (could be anyone) circuits.

Much of today’s IT is like this: it involves user-owned technologies like computers and phones combined in complicated ways with cloud-based services from outside providers. We’re always partly in the cloud, and partly on the ground, partly in control and partly at the mercy of providers. That’s great when it works. But when something doesn’t work, diagnosis can be fiendishly difficult – especially if one is 2,000 miles away and can’t see one’s mother’s screen.

My mother has a solution to this: she has a local kid on retainer, her “guru”. She calls him when something doesn’t work, and he comes over to diagnose and fix the problem. She called me because her guru was on vacation.

Why doesn’t my mother call the RoadRunner or HP help desks? She’s not alone in this: vendor help desks seem to be users’ last resort, rather than their first. Most of us would explain this in two words: “Silos” and “India”. Help desks grew up around specific technologies or applications. Users dealt with one help desk if their wired phone wasn’t working, another to straighten out a cellphone problem, another if their email was over quota, another to figure out how to do online journal transfers from their desks, another to tighten up computer security, and so on — hence, “Silos”. On the other end of the phone, service providers developed highly specialized help desks comprising inexpensive staff working from very specific databases of known problems and stock solutions (“India”).

This worked so long as technologies were actually separate. But in this age of everything connected to everything else, it doesn’t. Far too many help-desk interactions comprise a distant help-desk technician not finding the problem in the database, usually because the problem involves inter-application pathology. As a last but frequent resort, the help desk has the user reboot (or restore to factory specs, or whatever). Once users realize that help desks solve all problems with reboots, they don’t bother calling help desks. They reboot. When reboots don’t work, they call their friends and relatives.

We know how to fix this, right? Consolidate help desks, cross-train staff, and all will be well.

That hasn’t worked either. Cross-training staff to deal with complex inter-technology problems means we need really clever technology folks as help-desk staff, or even to divert developers and integrators to that task. Staff who rely on knowledge bases simply can’t cope with the typical cross-technology problem, as we know from the India silos. But hiring or diverting experts is expensive. To solve that problem, we implement first-tier triage, so that expensive staff only deal with expensive problems. But none of this works: there’s still too much rebooting, and users remain dissatisfied.
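
A minimal sketch may show why the tiered model keeps defaulting to the reboot. The categories, knowledge-base entries, and costs below are invented; the structural point is that a knowledge base keyed to single silos cannot enumerate the combinatorial space of cross-technology problems.

```python
# Hypothetical sketch of first-tier help-desk triage. The knowledge
# base is keyed by single technologies, so any problem spanning two
# or more silos misses every entry and falls through to the default.
KNOWLEDGE_BASE = {
    frozenset({"email"}): "Check quota; reset password.",
    frozenset({"wifi"}): "Forget and rejoin the network.",
    frozenset({"printer"}): "Reinstall the printer driver.",
}

EXPERT_MINUTES = 40  # invented cost of escalating to expensive staff

def triage(technologies: set[str]) -> str:
    key = frozenset(technologies)
    if key in KNOWLEDGE_BASE:
        # Single-silo problem: the cheap tier works as designed.
        return KNOWLEDGE_BASE[key]
    if len(technologies) == 1:
        # Unknown but still single-silo: escalation is at least plausible.
        return f"Escalate to expert queue (~{EXPERT_MINUTES} staff minutes)."
    # Cross-technology pathology: no knowledge base can enumerate every
    # combination, so out comes the stock answer.
    return "Have you tried rebooting?"

print(triage({"email"}))                     # knowledge-base answer
print(triage({"email", "wifi", "browser"}))  # the reboot default
```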

My mother has the right idea. Today’s help desks typically work for providers — be they commercial or internal to colleges and universities — and not for users. Help desks’ incentives are aligned with the providers’ bottom lines, not with understanding how their provider’s service fits into the user’s environment.

The solution, I propose, may be to have users own help desks.

What would that mean? In the typical college or university, there are help desks operated by service providers (both central and departmental), and there are help desks operated by non-provider departments. Users typically find the latter helpful and worthwhile, and complain extensively about the unresponsiveness of the former. The result is structural tension at the senior level between the overseers of non-provider help desks (typically deans) and overseers of provider help desks (typically a CIO, or whoever’s in charge of central IT). Usually the tension either goes unresolved or, especially when corporate-style consultants are involved, provider-based centralization wins.

Maybe that’s precisely the wrong answer. Perhaps campus IT service providers should not own help desks. Instead, colleges and universities might rely on entities close to users — administrative units, academic departments, centers, building management — to provide first-level support. Realigning help desks this way wouldn’t be easy. It would raise budgetary issues, governance issues, local-control issues, centralization issues. It would entail diseconomies of scale.

But help-desk staff would understand users’ circumstances holistically — much as my mother’s guru understands her environment. This would make them better IT diagnosticians and therapists. If users got better diagnosis and therapy for their IT problems, they’d be more productive. And those productivity gains might amply offset the increased costs of localized, realigned help desks.

Reflections on CIOship, Part II

Larry Kohlberg’s work, which I mentioned a couple of posts ago, was an example of what graduate students in my time called “Stage Quest”: the effort, following Jean Piaget’s work on cognitive development, to find and document inexorable stage sequences in all kinds of development, including political, organizational, and occupational. In due course it became clear that not all development occurs through an invariant succession of stages. Yet Stage Quest keeps creeping into all kinds of organizational thinking and advice, not least the piece I’m ruminating on today: “‘Follow the Money’ and Other Unsolicited Advice for CIOs,” from 1999.

Over a recent weekend I learned how to drive a 1952 Ford tractor. It’s fun to drive vehicles other than one’s car — in high school, for example, I briefly drove a Greyhound bus around a parking lot (well, okay, it wasn’t actually Greyhound, since it was in Mexico, but it was the same kind of bus — and this was when diesel intercity buses still had manual transmissions) and a friend’s TR-3 with a gearshift one could pull out of the transmission. Later I drove small trucks across the country a few times, and a motorcycle maybe twice, and those little Disney cars, which, truth be told, seemed much faster and more exciting in Anaheim when I was a kid than our son found them when he tried them years later in Orlando.

But I digress. Driving a tractor turns out to be very different from all those other vehicles in one key respect: you pick your gear before you start out, and then you stick with it. If you want to change gears, you stop the tractor completely first, and start over. For a tractor to work without unexpected interruptions, you need to understand what the job is before you set out. If you stick with the wrong gear, the tractor stalls, and you fail.

Here’s my summary reflection on “Follow the Money…”: It’s too much based on Stage Quest, and not enough on a 1952 tractor. The article arose from a conversation with a longstanding CIO from another institution. “What advice,” he asked me, “would you give a new CIO?” We came up with a dozen categories:

  • Ends justify means,
  • Who’s the boss?,
  • Blood is thicker than water,
  • Round up the usual suspects,
  • Espouse the party line,
  • What’s the difference?,
  • Silence is golden,
  • Follow the money,
  • Timing is everything,
  • Consort with the enemy,
  • Practice what you preach, and
  • Be where you are, not where you were.

The relative importance of voice and data networking has changed, and I was unduly optimistic about desktop support becoming simpler, but much of the advice remains sound — especially, as I wrote in Part I of CIOship, the advice to avoid hierarchical approaches to IT organization and its relationships with other campus entities, and to collaborate actively across institutional boundaries.

But, as I said above, the article seems in retrospect to be excessively based, albeit implicitly, on the Stage Quest assumption that CIOs face similar series of challenges in different institutions. This, in turn, implies that CIOs at early stages in the series can learn from those who have progressed to later stages. Yet it has become quite clear, especially over the past few years, that higher-education CIOships come in distinct forms. Like the various jobs a tractor driver might tackle, each form of the job entails a different series of stages, and requires a different gear. Some CIOs are hired to clean up a mess, for example, some are hired to encourage IT-based innovation, some are hired to consolidate past gains, and some are hired to focus on a particular problem. What one might do to Consolidate will fail miserably if one’s job is Cleanup.

The advice missing from “Follow the money…” is this: it’s very important to know which CIO job one has, and therefore which suite of skills and actions — which gear, that is — the job requires. A CIO hired into a Cleanup job will succeed by doing different things than one hired into an Innovate job. Those require different approaches than Consolidate or Focus jobs. Advice, in short, must be taken only in context.

This may help explain the discomfort with CIOship evident in “A CIO’s Question: Will You Still Need Me When I’m 64?”, which I wrote five years later. I’ll revisit that in the next Reflections on CIOship post.

Reflections on CIOship, Part I

Not having been a CIO for over a year, I’ve been thinking about the evolution of that role — whatever its associated title — in colleges and universities.

As some of you know, I’ve thought and written about CIOship before. In the spirit of Ruminations, I figured the right way to rethink about CIOship was to revisit how I’d thought about it before.

So in this and the next two posts I’ll revisit those earlier pieces and see whether they still make sense. Ignoring chronology, I start with a short piece from a collection in EDUCAUSE Review three years ago. My part was short, so I’ll just quote it in full.

The U.S. Military Academy is a highly centralized organization: an end-of-career superintendent manages a largely transient faculty and a hierarchical administration. Harvard, in contrast, is a highly decentralized organization: a president nudges and coaxes deans, who control most of the resources and who in turn nudge and coax department heads and faculty, who enjoy substantial autonomy.

Like most other colleges and universities, the University of Chicago operates between these extremes—or, more typically and problematically, tries to combine the two. The budget process is centralized, for example, but its product is a set of formulas outlining boundaries within which deans and vice presidents have great freedom. Similarly, the university claims to have centralized telecommunications procurement, but somehow cell phones aren’t included.

Even faculty hiring has both centralized and decentralized components, causing occasional tension between department heads and the provost’s office. Confusion results, especially regarding the processes for setting priorities, resolving conflicts, and negotiating trade-offs.

Senior leadership groups—an officers group, a deans group, and an executive budget committee—exist to resolve these tensions between decentralized and centralized goals and actions. To the extent that these groups evolve into collaborative teams, they work well for issues of general institutional importance. They work less well for issues that involve only a subset of units: IT and Facilities, with overlapping responsibilities for design and installation; intellectual and administrative units, with divergent goals for student life; or pairs of academic departments, with a need to exploit synergies. So I—like other vice presidents, as well as deans and colleagues—have lots of lunches at the Quadrangle Club, the standard place for University of Chicago administrators and faculty to conduct lunch meetings.

For those of us with sedentary habits and no willpower, lunch can be a problem. That’s especially true for me, since the Quad Club makes a delicious BLT wrap, which is in effect a handful of garnished bacon only minimally buffered by a thin wrapper—and I love bacon. Ordinarily, mindful of my waistline, I’d try to avoid Quad Club lunches. But who has lunch with whom—and, sometimes more important, who sees whom having lunch with whom and stops by the table—is very important to the university’s functioning.

An aggregation of dyadic administrative lunches helps us behave as though we are centralized, even as each of us jealously guards his or her decentralized authority. Our lunches don’t turn into negotiating sessions, and only rarely do concrete decisions emerge from them. Rather, lunches give us the opportunity to share thoughts, experiences, perspectives, enthusiasm, paranoia, gossip—the informal information about one another that enables us to negotiate, collaborate, complain, and respond appropriately when our domains do or should engage one another.

This coming year promises to be challenging at the university: an ambitious president, lots of new vice presidents, currents of organizational and cultural change, and many trade-offs to be negotiated. Some of these challenges will call for more centralization and others for more decentralization. Some will necessitate a lunch. In this, I think, we are typical. Bacon is good.

In addition to celebrating bacon, I proposed that formal processes don’t suffice to manage colleges and universities, especially in times of change. A year later, the national economy fell apart as the credit bubble burst, and most institutions found themselves managing two kinds of change at once: the intellectual and programmatic expansion required by technological and social progress, plus the unwelcome shrinkage required to operate within drastically smaller and uncertain resources.

Some institutions have managed to adapt to these challenging circumstances without major dislocation. Others haven’t. It seems to me, based on lots of conversations with IT leaders from diverse institutions, that the existence of established, effective backchannel relationships is even more important in times of shrinkage than in times of growth.

Competition often dominates in times of growth. Competition really doesn’t require backchannels; in fact, backchannels can undercut competitive will. Shrinkage, however, requires collaboration and negotiation rather than competition. And collaboration and negotiation most definitely benefit from backchannels.

In other words, bacon is even more important today than it was in 2007. In the next episode: does it still make sense to follow the money?