Posts Tagged ‘Cloud’

The Rock, and The Hard Place

Looking into the near-term future—say, between now and 2020—we in higher education have to address two big challenges, both involving IT. Neither admits easy progress. But if we don’t address them, we’ll find ourselves caught between a rock and a hard place.

  • The first challenge, the rock, is to deliver high-quality, effective e-learning and curriculum at scale. We know how to do part of that, but key pieces are missing, and it’s not clear how we will find them.
  • The second challenge, the hard place, is to recognize that enterprise cloud services and personal devices will make campus-based IT operations the last rather than the first resort. This means everything about our IT base, from infrastructure through support, will be changing just as we need to rely on it.

“But wait,” I can hear my generation of IT leaders (and maybe the next) say, “aren’t we already meeting those challenges?”

If we compare today’s e-learning and enterprise IT with those of the recent past, those leaders might rightly suggest, immense change is evident:

  • Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors—none were common even in 2000, and they’re commonplace today.
  • Commercial administrative systems, virtualized servers, corporate-style email, web front ends—ditto.

That’s progress and achievement we all recognize, applaud, and celebrate. But that progress and achievement overcame past challenges. We can’t rest on our laurels.

We’re not yet meeting the two broad future challenges, I believe, because in each case fundamental and hard-to-predict change lies ahead. The progress we’ve made so far, however progressive and effective, won’t steer us between the rock of e-learning and the hard place of enterprise IT.

The fundamental change that lies ahead for e-learning
is the transition from campus-based to distance education

Back in the 1990s, Cliff Adelman, then at the US Department of Education, did a pioneering study of student “swirl,” that is, students moving through several institutions, perhaps with work intervals along the way, before earning degrees.

“The proportion of undergraduate students attending more than one institution,” he wrote, “swelled from 40 percent to 54 percent … during the 1970s and 1980s, with even more dramatic increases in the proportion of students attending more than two institutions.” Adelman predicted that “…we will easily surpass a 60 percent multi-institutional attendance rate by the year 2000.”

Moving from campus to campus for classes is one step; taking classes at home is the next. And so distance education, long constrained by the slow pace and awkward pedagogy of correspondence courses, has come into its own. At first it was relegated to “nontraditional” or “experimental” institutions—Empire State College, Western Governors University, UNext/Cardean (a cautionary tale for another day), Kaplan. Then it went mainstream.

At first this didn’t work: a collaboration among several first-tier research universities, led by Columbia, found no market for its high-quality online offerings. (Its Executive Director has just written a thoughtful essay on MOOCs, drawing on her experience.)

Today, though, a great many traditional colleges and universities successfully bring instruction and degree programs to distant students. Within the recent past these traditional institutions have expanded into non-degree efforts like OpenCourseWare and to broadcast efforts like the MOOC-based Coursera and edX. In 2008, 3.7% of students took all their coursework through distance education, and 20.4% took at least one class that way.

Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors—the innovations that helped us overcome past challenges—little of that progress was designed for swirling students who never set foot on campus.

We know how to deliver effective instruction to motivated students at a distance. But policy issues remain unresolved: we don’t yet know how to

  • confirm their identity,
  • assess their readiness,
  • guide their progress,
  • measure their achievement,
  • standardize course content,
  • construct and validate curriculum across diverse campuses, or
  • certify degree attainment

in this imminent world. Those aren’t just IT problems, of course. But solving them will almost certainly challenge IT.

The fundamental change that lies ahead for enterprise technologies
is the transition from campus IT to cloud and personal IT

The locus of control over all three principal elements of campus IT—servers and services, networks, and end-user devices and applications—is shifting rapidly from the institution to customers and third parties.

As recently as ten years ago, most campus IT services, everything from administrative systems through messaging and telephone systems to research technologies, were provided by campus entities using campus-based facilities, sometimes centralized and sometimes not. The same was true for the wired and then wireless networks that provided access to services, and for the desktop and laptop computers faculty, students, and staff used.

Today shared services are migrating rapidly to servers and systems that reside physically and organizationally elsewhere—the “cloud”—and the same is happening for dedicated services such as research computing. It’s also happening for networks, as carrier-provided cellular technologies compete with campus-provided wired and WiFi networking, and for end-user devices, as highly mobile personal tablets and phones supplant desktop and laptop computers.

As I wrote in an earlier post about “Enterprise IT,” the scale of enterprise infrastructure and services within IT and the shift in their locus of control have major implications for the organizations that have provided them. Campus IT organizations grew up around locally designed services running on campus-owned equipment managed by internal staff. Organization, staffing, and even funding models followed accordingly. Even in academic computing and user support, “heavy metal” experience was valued highly. The shifting locus of control makes other skills at least as valuable: the ability to negotiate with suppliers, to engage effectively with customers (indeed, to think of them as “customers” rather than “users”), to manage spending and investments under constraint, and to explain these changes to the campus.

To be sure, IT organizations still require highly skilled technical staff, for example to fine-tune high-performance computing and networking, to ensure that information is kept secure, to integrate systems efficiently, and to identify and authenticate individuals remotely. But these technologies differ greatly from traditional heavy metal, and so must enterprise IT.

The rock, IT, and the hard place

In the long run, it seems to me that the campus IT organization must evolve rapidly to center on seven core activities.

Two of those are substantive:

  • making sure that researchers have the technologies they need, and
  • making sure that teaching and learning benefit from the best thinking about IT applications and effectiveness.

Four others are more general:

  • negotiating and overseeing relationships with outside providers;
  • specifying or doing what is necessary for robust integration among outside and internal services;
  • striking the right personal/institutional balance between security and privacy for networks, systems, and data; and last but not least
  • providing support to customers (both individuals and partner entities).

The seventh core activity, which should diminish over time, is

  • operating and supporting legacy systems.

Creative, energetic, competent staff are the sine qua non of that kind of forward-looking organization. It’s very hard to do good IT without good, dedicated people, and those are increasingly difficult to find and keep, not least because colleges and universities compete poorly with the stock options, pay, glitz, and technology the private sector can offer. Therein lies another challenge: promoting loyalty and high morale among staff who know they could be making more elsewhere.

To the extent the rock of e-learning and the hard place of enterprise IT frame our future, we not only need to rethink our organizations and what they do; we also need to rethink how we prepare, promote, and choose higher-education IT leaders on campus and elsewhere—the topic, fortuitously, of a recent ECAR report, and of widespread rethinking within EDUCAUSE.

We’ve been through this before, and risen to the challenge.

  • Starting around 1980, minicomputers and then personal computers brought IT out of the data center and into every corner of higher education, changing the data center, the IT organization, and the campus in ways we could not have imagined.
  • Then in the 1990s campus, regional, and national networks connected everything, with similarly widespread consequences.

We can rise to the challenges again, too, but only if we understand their timing and the transformative implications.

Transforming Higher Education through Learning Technology: Millinocket?

Down East

Note to prospective readers: This post has evolved, through extensive revision and expansion and more careful citation, into a paper available at

You might want to read that paper, which is much better and more complete, instead of this post — unless you like the pictures here, which for the moment aren’t in the paper. Even if you read this to see the pictures, please go read the other.

“Which way to Millinocket?,” a traveler asks. “Well, you can go west to the next intersection…” the drawling down-east Mainer replies in the Dodge and Bryan story,

“…get onto the turnpike, go north through the toll gate at Augusta, ’til you come to that intersection…. well, no. You keep right on this tar road; it changes to dirt now and again. Just keep the river on your left. You’ll come to a crossroads and… let me see. Then again, you can take that scenic coastal route that the tourists use. And after you get to Bucksport… well, let me see now. Millinocket. Come to think of it, you can’t get there from here.”

PLATO and its programmed-instruction kin were supposed to transform higher education. So were the Apple II, and then the personal computer – PC and then Mac – and then the “3M” workstation (megapixel display, megabyte memory, megaflop speed) for which Project Athena was designed. So were simulated laboratories, so were BITNET and then the Internet, so were MUDs, so was Internet2, so was artificial intelligence, so was supercomputing.

Each of these most certainly has helped higher education grow, evolve, and gain efficiency and flexibility. But at its core, higher education remains very much unchanged. That may no longer suffice.

What about today’s technological changes and initiatives – social media, streaming video, multi-user virtual environments, mobile devices, the cloud? Are they to be evolutionary, or transformational? If higher education needs the latter, can we get there from here?

It’s important to start conversations about questions like these from a common understanding of information technologies that currently play a role in higher education, what that role is, and how technologies and their roles are progressing. That’s what prompted these musings.

Information Technology

For the most part, “information technology” means a tripartite array of hardware and software:

  • end-user devices, which today range from large desktop workstations to small mobile phones, typically with some kind of display, some way to make choices and enter text, and various other capabilities variously enabled by hardware and software;
  • servers, which are not just racks of processors, storage, and other hardware but aggregations of hardware, software, applications, and data that provide services to multiple users (when the aggregation is elsewhere, it’s often called “the cloud” today); and
  • networks, wireless or wired, which interlink local servers, remote server clouds, and end-user devices, and which typically comprise copper and glass cabling, routers and switches and optronics, and network operating system plus some authentication and logging capability.

Information technology tends to progress rapidly but unevenly, with progress or shortcomings in one domain driving or retarding progress in others.

Today, for example, the rapidly growing capability of small smartphones has taxed previously underused cellular networks. Earlier, excess capability in the wired Internet prompted innovation in major services like Google and YouTube. The success of Google and Amazon forced innovation in the design, management, and physical location of servers.

Perhaps the most striking aspects of technological progress have been its convergence and integration. Whereas once one could reasonably think separately about servers, networks, and end-user devices, today the three are not only tightly interconnected and interdependent, but increasingly their components are indistinguishable. Network switches are essentially servers, servers often comprise vast arrays of the same processors that drive end-user devices plus internal networks, and end-user devices readily tackle tasks – voice recognition, for example – that once required massive servers.

Access to Information Technology

Progress, convergence, and integration in information technology have driven dramatic and fundamental change in the information technologies faculty, students, colleges, and universities have. That progress is likely to continue.

Here, as a result, are some assumptions we can reasonably make today:

  • Households have some level of broadband access to the Internet, and at least one computer capable of using that broadband access to view and interact with Web pages, handle email and other messaging, listen to audio, and view videos of at least YouTube quality.
  • Teenagers and most adults have some kind of mobile phone, and that phone usually has the capability to handle routine Internet tasks like viewing Web pages and reading email.
  • Colleges and universities have building and campus networks operating at broadband speeds of at least 10 Mb/s, and most have wireless networks operating at 802.11b (11 Mb/s) or greater speed.
  • Server capacity has become quite inexpensive, largely because “cloud” providers have figured out how to gain and then sell economy of scale.
  • Everyone – or at least everyone between the ages of, say, 12 and 65 – has at least one authenticated online identity, including email and other online service accounts; Facebook, Twitter, Google, or other social-media accounts; online banking, financial, or credit-card access; or network credentials from a school, college or university, or employer.
  • Everyone knows how to search on the Internet for material using Google, Bing, or other search engines.
  • Most people have a digital camera, perhaps integrated into their phone and capable of both still photos and videos, and they know how to send their photos to others or offload them onto their computers or an online service.
  • Most college and university course materials are in electronic form, and so is a large fraction of library and reference material used by the typical student.
  • Most colleges and universities have readily available facilities for creating video from lectures and similarly didactic events, whether in classrooms or in other venues, and for streaming or otherwise making that video available online.

It’s striking how many of these assumptions were invalid even as recently as five years ago. Most of the assumptions were invalid a decade before that (and it’s sobering to remember that the “3M” workstation was a lofty goal as recently as 1980 and cost nearly $10,000 in the mid-1980s, yet today’s iPhone almost exceeds the 3M spec).

Looking a bit into the future, here are some further assumptions that probably will be safe:

  • Typical home networking and computers will have improved to the point they can handle streamed video and simple two-way video interactions (which means that at least one home computer will have an add-on or built-in camera).
  • Most people will know how to communicate with individuals or small groups online through synchronous social media or messaging environments, in many cases involving video.
  • Authentication and monitoring technologies will exist to enable colleges and universities to reasonably ensure that their testing and assessment of student progress is protected from fraud.
  • Pretty much everyone will have the devices and accounts necessary for ubiquitous connectivity with anybody else and to use services from almost any college, university, or other educational provider.

Technology, Teaching, and Learning

In colleges and universities, as in other organizations, information technology can promote progress by enabling administrative processes to become more efficient and by creating diverse, flexible pathways for communication and collaboration within and across different entities. That’s organizational technology, and although it’s very important, it affects higher education much the way it affects other organizations of comparable size.

Somewhat more distinctively, information technology can become learning technology, an integral part of the teaching and learning process. Learning technology sometimes replaces traditional pedagogies and learning environments, but more often it enhances and expands them.

The basic technology and middleware infrastructure necessary to enable colleges and universities to reach, teach, and assess students appears to exist already, or will before long. This brings us to the next question: What applications turn information technology into learning technology?

To answer this, it’s useful to think about four overlapping functions of learning technology.

Amplify and Extend Traditional Pedagogies, Mechanisms, and Resources

For example, by storing and distributing materials electronically, by enabling lectures and other events to be streamed or recorded, and by providing a medium for one-to-one or collective interactions among faculty and students, IT potentially expedites and extends traditional roles and transactions. Similarly, search engines and network-accessible library and reference materials vastly increase faculty and student access to information. The effect, although profound, nevertheless falls short of transformational. Chairs outside faculty doors give way to “learning management systems” like Blackboard or Sakai or Moodle, wearing one’s PJs to 8am lectures gives way to watching lectures from one’s room over breakfast, and library schools become information-science schools. But the enterprise remains recognizable. Even when these mechanisms go a step further, enabling true distance education whereby students never set foot on campus (in 2011, 3.7% of all students took all their coursework through distance education), the resulting services remain recognizable. Indeed, they are often simply extensions of existing institutions’ campus programs.

Make Educational Events and Materials Available Outside the Original Context

For example, the OpenCourseWare (OCW) initiative started as a publicly accessible repository of lecture notes, problem sets, and other material from MIT classes. It has since grown to include similar material from scores of other institutions worldwide. Similarly, the newer Khan Academy has collected a broad array of instructional videos on diverse topics, some from classes and some prepared especially for Khan, and made them available to anyone interested in learning the material. OCW, Khan, and initiatives like them provide instructional material in pure form, rather than as part of curricula or degree programs.

Enable Experience-Based Learning 

This most productively involves experience that otherwise might have been unaffordable, dangerous, or otherwise infeasible. Simulated chemistry laboratories and factories were an early example – students could learn to synthesize acetylene by trial and error without blowing up the laboratory, or to fine-tune just-in-time production processes without bankrupting real manufacturers. As computers have become more powerful, so have simulations become more complex and realistic. As simulations have moved to cloud-based servers, multi-user virtual environments have emerged, which go beyond simulation to replicate complex environments. Experiences like these were impossible to provide before the advent of powerful, inexpensive server clouds, ubiquitous networking, and graphically capable end-user devices.

Replace the Didactic Classroom Experience

This is the most controversial application of learning technology – “Why do we need faculty to teach calculus on thousands of different campuses, when it can be taught online by a computer?” – but also one that drives most discussion of how technology might transform higher education. It has emerged especially for disciplines and topics where instructors convey what they know to students through classroom lectures, readings, and tutorials. PLATO (Programmed Logic for Automated Teaching Operations) emerged from the University of Illinois in the 1960s as the first major example of computers replacing teachers, and has been followed by myriad attempts, some more successful than others, to create technology-based teaching mechanisms that tailor their instruction to how quickly students master material. (PLATO’s other major innovation was partnership with a commercial vendor, the now defunct Control Data Corporation.)
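
The pacing idea behind such mastery-based systems is simple to state, even though real implementations are far richer. As a toy sketch (in Python, and emphatically not anything PLATO actually ran), an adaptive rule might advance a student to new material only after a streak of consecutive correct answers, and keep drilling otherwise:

```python
# Toy mastery-pacing rule: advance only after `mastery_streak`
# consecutive correct answers on the current topic.
# (Illustrative only; the threshold and the rule itself are assumptions,
# not a description of PLATO or any commercial product.)

def next_step(history, mastery_streak=3):
    """history: list of True/False results on the current topic,
    oldest first. Returns "advance" or "repeat"."""
    if len(history) >= mastery_streak and all(history[-mastery_streak:]):
        return "advance"
    return "repeat"

# A student with three straight correct answers moves on...
assert next_step([True, True, True]) == "advance"
# ...while a recent mistake keeps the student on the current topic.
assert next_step([True, False, True, True]) == "repeat"
```

The point of even this crude rule is that instruction becomes self-paced: quick students move through material rapidly, while others get more practice, which is precisely the tailoring the paragraph above describes.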

Higher Education

We now come to the $64 question: what role might trends in higher-education learning technology play in the potential transformation of higher education?

The transformational goal for higher education is to carry out its social and economic roles with greater efficiency and within its resource constraints. Many believe that such transformation requires a very different structure for future higher education. What might that structure be, and what role might information technologies play in its development?

The fundamental purpose of higher education is to advance society, polity, and the economy by increasing the social, political, and economic skills and knowledge of students – what economists call “human capital”. At the postsecondary level, education potentially augments students’ human capital in four ways:

  • admission, which is to say declaring that a student has been chosen as somehow better qualified or more adaptable in some sense than other prospective students (this is part of Lester Thurow‘s “job queue” idea);
  • instruction, including core and disciplinary curricula, the essentially unidirectional transmission of concrete knowledge through lectures, readings, and the like, and also the explication and amplification of that knowledge through classroom, tutorial, and extracurricular guidance and discussion (this is what we often mean by the narrow term “teaching”);
  • certification, specifically the measuring of knowledge and skill through testing and other forms of assessment; and
  • socialization, specifically learning how to become an effective member of society independently of one’s origin family, through interaction with faculty and especially with other students.

Sometimes a student gets all four together. For example, MIT marked me even before I enrolled as someone likely to play a role in technology (admission), taught me a great deal about science and engineering generally, electrical engineering in particular, and their social and economic context (instruction), documented through grades based on exams, lab work, and classroom participation that I had mastered (or failed to master) what I’d been taught (certification), and immersed me in an environment wherein data-based argument and rhetoric guided and advanced organizational life, and thereby helped me understand how to work effectively within organizations, groups, and society (socialization).

Most students attend colleges whose admissions processes amount to open admission, or involve simple norms rather than competition. That is, anyone who meets certain standards, such as high-school completion with a given GPA or test score, is admitted. In 2010, almost half of all institutions reported having no admissions criteria, and barely 11% accepted fewer than a quarter of their applicants. Moreover, most students do not live on campus — in 2007–08, only 14% of undergraduates lived in college-owned housing. This means that most of higher education has limited admission and socialization effects. Therefore, for the most part higher education affects human capital through instruction and certification.

Instruction is an especially fertile domain for technological progress. This is because three trends converge around it:

  • ubiquitous connectivity, especially from students’ homes;
  • the rapidly growing corpus of coursework offered online, either as formal credit-bearing classes or as freestanding materials from entities like OCW or Khan; and
  • (perhaps more speculatively) the growing willingness of institutions to grant credit and allow students to satisfy requirements through classes taken at other institutions or through some kind of testing or assessment.

Indeed, we can imagine a future where it becomes commonplace for students to satisfy one institution’s degree requirements with coursework from many other institutions. Further down this road, we can imagine there might be institutions that admit students, prescribe curriculum, certify progress, and grant degrees – but have no instructional faculty and do not offer courses. This, in turn, might spawn purely instructional institutions.

One problem with such a future is that socialization, a key function of higher education, gets lost. This points the way to one major technology challenge for the future: Developing online mechanisms, for students who are scattered across the nation or the world, that provide something akin to rich classroom and campus interaction. Such interaction is central to the success of, for example, elite liberal-arts colleges and major residential universities. Many advocates of distance education believe that social media such as Facebook groups can provide this socialization, but that potential has yet to be realized.

A second problem with such a future is that robust, flexible methods for assessing student learning at a distance remain either expensive or insufficient. For example, ProctorU and Kryterion are two of several commercial entities that provide remote exam proctoring, but they do so through somewhat intensive use of video observation, and that only works for rather traditional written exams. For another example, in the aftermath of 9/11 many universities figured out how to conduct doctoral thesis defenses using high-bandwidth videoconferencing facilities rather than flying in faculty from other institutions, but this simply reduced travel expense rather than changed the basic idea that several faculty members would examine one student at a time.


If learning technologies are to transform higher education, we must exploit opportunities and address problems. At the same time, transformed higher education cannot neglect important dimensions of human capital. In that respect, our goal should be not only to make higher education more efficient than it is today, but also better.

Drivers headed for Millinocket rarely pull over any more to ask directions of drawling downeasters. Instead, they rely on the geographic position and information systems built into their cars or phones or computers, which in turn rely on network connectivity to keep maps and traffic reports up to date. To be sure, reliance on GPS and GIS tends to insulate drivers from interaction with the diversity they pass along the road, much as Interstate highways standardized cross-country travel. So the gain from those applications is not without cost.

The same is true for learning technology: it will yield both gains and losses. Effective progress will result only if we explore and understand the technologies and their applications, decide how these relate to the structure and goals of higher education, identify obstacles and remedies, and figure out how to get there from here.

IT Demography in Higher Education: Some Reminiscence & Speculation

In oversimplified caricature, many colleges and universities have traditionally staffed the line, management, and leadership layers of their IT enterprise thus:

Students with some affinity for technology (perhaps their major, perhaps work-study, perhaps just a side interest) have approached graduation not quite sure what they should do next. They’ve had some contact with the institution’s IT organizations, perhaps having worked for some part of them or perhaps having criticized their services. Whatever the reason, working for an institutional IT organization has seemed a useful way to pay the rent while figuring out what to do next, and it’s been a good deal for the IT organizations because recent graduates are usually pretty clever, know the institution well, learn fast, and are willing to work hard for relatively meager pay.

Moreover, and partly compensating for low pay, the technologies being used and considered in higher education often have been more advanced than those out in business, so sticking around has been a good way to stay at the technological cutting edge, and colleges and universities have tended to value and reward autonomy, curiosity, and creativity.

Within four or five years of graduation, most staff who come straight into the IT organization have figured out that it’s time to move on. Sometimes a romantic relationship has turned their attention to life plans and long-term earnings, sometimes ambition has taken more focused shape and so they seek a steeper career path, sometimes their interests have sharpened and readied them for graduate school — but in any case, they have left the campus IT organization for other pastures after a few good, productive years, and have been replaced by a new crop of recent graduates.

But a few individuals have found that working in higher education suits their particular hierarchy of needs (to adapt and somewhat distort Maslow). For them, IT work in higher education has yielded several desiderata (remember I’m still caricaturing here): there’s been job security, a stimulating academic environment, a relatively flat organization that offers considerable responsibility and flexibility, and an opportunity to work with and across state-of-the-art (and sometimes even more advanced) technologies. Benefits have been pretty good, even though pay hasn’t and there have been no stock options. Individuals to whom this mix appeals have stayed in campus IT, rising to middle-management levels, sometimes getting degrees in the process, and sometimes, as they have moved into #3 or #2 positions, even moving to other campuses as opportunities present themselves.

Higher-education IT leaders — that is, CIOs, the heads of major decentralized IT organizations, and in some cases the #2s within large central organizations — typically have come from one of two sources. Some have come from within higher-education IT organizations, sometimes the institution’s own but more typically, since a given institution usually has more leadership-ready middle managers than it has available leadership positions, another institution’s. (Whereas insiders once tended to be heavy-metal computer-center directors, more recently they have come from academic technologies or networking.) Other leaders have come from faculty ranks, often (but not exclusively) in computer science or other technically oriented disciplines. Occasionally leaders come from other sources, such as consulting firms or technology vendors, or even from administration elsewhere in higher education.

The traditional approach staffs IT organizations with well-educated, generally clever individuals highly attuned to the institution’s culture and needs. They are willing and able to tackle complex IT projects involving messy integration among different technologies. Those individuals also cost less than comparable ones would if hired from outside. Expected turnover among line staff notwithstanding, they are loyal to the institution even in the face of financial and management challenges.

But the traditional model also tilts IT organizations toward idiosyncrasy and patchwork rather than coherent architecture and efficiency-driven implementation. It often works against the adoption of effective management techniques, and it can promote hostility toward businesslike approaches to procurement and integration and indeed the entire commercial IT marketplace. All of this has been known, but in general institutions have continued to believe that the advantages of the traditional model outweigh its shortcomings.

I saw Moneyball in early October. I liked it mostly because it’s highly entertaining, it’s a good story, it’s well written, acted, directed, and produced, and it involves both applied statistical analysis (which is my training) and baseball (my son’s passion, and mine when the Red Sox are in the playoffs). I also liked it because its focus — dramatic change in how one staffs baseball teams — led me to think about college and university IT staffing. (And yes, I know my principles list says that “all sports analogies mislead”, but never mind.)

In one early scene, the Oakland A’s scouting staff explains to Brad Pitt’s character, Billy Beane, that choosing players depends on intuition honed by decades of experience with how the game is played, and that the approach Beane is proposing — choosing them based on how games are won rather than on intuition — is dangerous and foolhardy. Later, Arliss Howard’s character, the Red Sox owner John Henry, explains that whenever one goes against long tradition all hell breaks loose, and whoever pioneers or even advocates that change is likely to get bloodied.

So now I’ll move from oversimplification and caricature to speculation. To believe in the continued validity of the traditional staffing model may be to emulate the scouts in Moneyball. But to abandon the model is risky, since it’s not clear how higher-education IT can maintain its viability in a more “businesslike” model based on externally defined architectures, service models, and metrics. After all, Billy Beane’s Oakland A’s still haven’t won the World Series.

The Beane-like critique of the traditional model isn’t that the advantage/shortcoming balance has shifted, but rather that it depends on several key assumptions whose future validity is questionable. To cite four interrelated ones:

  • With the increasing sophistication of mobile devices and cloud-based services, the locus of technological innovation has shifted away from colleges and universities. Recent graduates who want to be in the thick of things while figuring out their life plans have much better options than staying on campus — they can intern at big technology firms, or join startups, or even start their own small businesses. In short, there is now competition for young graduates interested in IT but unsure of their long-term plans.
  • As campuses have outsourced or standardized much of their IT, jobs that once included development and integration responsibility have evolved into operations, support, and maintenance — which are important, but not very interesting intellectually, and which provide little career development.  Increased outsourcing has exacerbated this, and so has increased reliance on business-based metrics for things like user support and business-based architectures for things like authentication and systems integration.
  • College and university IT departments could once offset this intellectual narrowing because technology prices were dropping faster than available funds, and the resulting financial cushion could be dedicated to providing staff with resources and flexibility to go beyond their specific jobs (okay, maybe what I mean is letting staff buy gadgets and play with them). But tightened attention to productivity and resource constraints has largely eliminated the offsetting toys and flexibility. So IT jobs in colleges and universities have lost much of their nonpecuniary attractiveness, without any commensurate increase in compensation. Because of this, line staff are less likely to choose careers in college or university IT, and without this source of replenishment the higher-education IT management layer is aging.
  • As IT has become pervasively important to higher education, so responsibility for its strategic direction has broadened. As strategic direction has broadened, so senior leadership jobs, including the CIO’s, have evolved away from hierarchical control and toward collaboration and influence. (I’ve written about this elsewhere.) At the same time, increasing attention to business-like norms and metrics has required that IT leaders possess a somewhat different skillset than usually emerges from gradual promotion within college and university IT organizations or faculty experience. This has disrupted the supply chain for college and university IT leadership, as a highly fragmented group of headhunter firms competes to identify and recruit nontraditional candidates.

I think we’re already seeing dramatic change resulting from all this. The most obvious change is rapid standardization around commercial standards to enable outsourcing — which is appealing not only intrinsically, but because it reduces dependence on an institution’s own staff. (On the minus side, it also tends to emphasize proprietary commercial rather than open-source or open-standards approaches.) I also sense much greater interest in hiring from outside higher education, both at the line and management levels, and a concomitant reappraisal of compensation levels. That, combined with flat or shrinking resources, is eliminating positions, and the elimination of positions is promoting even more rapid standardization and outsourcing.

On the plus side, this is making college and university IT departments much more efficient and businesslike. On the minus side, higher education IT organizations may be losing their ability to innovate. This is yet another instance of the difficult choice facing us in higher-education IT: Is IT simply an important, central element of educational, research, and administrative infrastructure, or is IT also the vehicle for fundamental change in how higher education works? (In Moneyball, the choice is between player recruitment as a mechanism for generating runs, and as a mechanism for exciting fans. Sure, Red Sox fans want to win. But were they more avid before or after the Curse ended with Bill James’s help?)

If it’s the latter, we need to make sure we’re equipped to enable that — something that neither the traditional model nor the evolving “businesslike” model really does.




IT and Post-Institutional Higher Education: Will We Still Need Brad When He’s 54?

“There are two possible solutions,” Hercule Poirot says to the assembled suspects in Murder on the Orient Express (that’s p. 304 in the Kindle edition, but the 1974 movie starring Albert Finney is way better than the book, and it and the book are both much better than the abominable 2011 PBS version with David Suchet). “I shall put them both before you,” Poirot continues, “…to judge which solution is the right one.”

So it is for the future role, organization, and leadership of higher-education IT. There are two possible solutions. There’s a reasonably straightforward projection of how the role of IT in higher education will evolve into the mid-range future, but there’s also a more complicated one. The first assumes institutional continuity and evolutionary change. The second doesn’t.

IT Domains

How does IT serve higher education? Let me count the ways:

  1. Infrastructure for the transfer and storage of pedagogical, bibliographic, research, operational, and administrative information, in close synergy with other physical infrastructure such as plumbing, wiring, buildings, sensors, controls, roads, and vehicles. This includes not only hardware such as processors, storage, networking, and end-user devices, but also basic functionality such as database management and hosting (or virtualizing) servers.
  2. Administrative systems that manage, analyze, and display the information students, faculty, and staff need to manage their own work and that of their departments. This includes identity management, authentication, and other so-called “middleware” through which institutions define their communities.
  3. Pedagogical applications students and faculty need to enable teaching and learning, including tools for data analysis, bibliography, simulation, writing, multimedia, presentations, discussion, and guidance.
  4. Research tools faculty and students need to advance knowledge, including some tools that also serve pedagogy plus a broad array of devices and systems to measure, gather, simulate, manage, share, distill, analyze, display, and otherwise bring data to bear on scholarly questions.
  5. Community services to support interaction and collaboration, including systems for messaging, collaboration, broadcasting, and socialization both within campuses and across their boundaries.

“…A Suit of Wagon Lit Uniform…and a Pass Key…”

The straightforward projection, analogous to Poirot’s simpler solution (an unknown stranger committed the crime, and escaped undetected), stems from projections of how institutions themselves might address each of the IT domains as new services and devices become available, especially cloud-based services and consumer-based end-user devices. The core assumptions are that the important loci of decisions are intra-institutional, and that institutions make their own choices to maximize local benefit (or, in the economic terms I mentioned in an earlier post, to maximize their individual utility).

Most current thinking in this vein goes something like this:

  • We will outsource generic services, platforms, and storage, and perhaps
  • consolidate and standardize support for core applications and
  • leave users on their own insofar as commercial devices such as phones and tablets are concerned, but
  • we must for the foreseeable future continue to have administrative systems securely dedicated and configured for our unique institutional needs, and similarly
  • we must maintain control over our pedagogical applications and research tools since they help distinguish us from the competition.

Evolution based on this thinking entails dramatic shrinkage in data-center facilities, as virtualized servers housed in or provided by commercial or collective entities replace campus-based hosting of major systems. It entails several key administrative and community-service systems being replaced by standard commercial offerings — for example, the replacement of expense-reimbursement systems by commercial products such as Concur, of dedicated payroll systems by commercial services such as ADP, and of campus messaging, calendaring, and even document-management systems by more general services such as Google’s or Microsoft’s. Finally, thinking like this typically drives consolidation and standardization of user support, bringing departmental support entities into alignment if not under the authority of central IT, and standardizing requirements and services to reduce response times and staff costs.

How might higher-education IT evolve if this is how things go? In particular, what effects would it have on IT organization and leadership?

One clear consequence of such straightforward evolution is a continuing need for central guidance and management across essentially the current array of IT domains. As I tried to suggest in a recent article, the nature of that guidance and management would change, in that control would give way to collaboration and influence. But institutions would retain responsibility for IT functions, and it would remain important for major systems to be managed or procured centrally for the general good. Although the skills required of the “chief information officer” would be different, CIOs would still be necessary, and most cross-institutional efforts would be mediated through them. Many of those cross-institutional efforts would involve coordinated action of various kinds, ranging from similar approaches to vendors through collective procurement to joint development.

We’d still need Brads.

“Say What You Like, Trial by Jury is a Sound System…”

If we think about the future unconventionally (as Poirot does in his second solution — spoiler in the last section below!), a somewhat more radical, extra-institutional projection emerges. What if Accenture, McKinsey, and Bain are right, and IT contributes very little to the distinctiveness of institutions — in which case colleges and universities have no business doing IT idiosyncratically or even individually?

In that case,

  • we will outsource almost all IT infrastructure, applications, services, and support, either to collective enterprises or to commercial providers, and therefore
  • we will not need data centers or staff, including server administrators, programmers, and administrative-systems technical staff, so that
  • the role of institutional IT will be largely to provide highly tailored support for research and instruction, which means that
  • in most cases there will be little to be gained from centralizing IT,
  • it will make sense for academic departments to do their own IT, and
  • we can rely on individual business units to negotiate appropriate administrative systems and services, and so
  • the balance will shift from centralized to decentralized IT organization and staffing.

What if we’re right that mobility, broadband, cloud services, and distance learning are maturing to the point where they can transform education, so that we have simultaneous and similarly radical change on the academic front?

Despite changes in technology and economics, and some organizational evolution, higher education remains largely hierarchical. Vertically-organized colleges and universities grant degrees based on curricula largely determined internally, curricula largely comprise courses offered by the institution, institutions hire their own faculty to teach their own courses, and students enroll as degree candidates in a particular institution to take the courses that institution offers and thereby earn degrees. As Jim March used to point out, higher education today (well, okay, twenty years ago, when I worked with him at Stanford) is pretty similar to its origins: groups sitting around on rocks talking about books they’ve read.

It’s never been that simple, of course. Most students take some of their coursework from other institutions, some transfer from one to another, and since the 1960s there have been examples of network-based teaching. But the model has been remarkably robust across time and borders. It depends critically on the metaphor of the “campus”, the idea that students will be in one place for their studies.

Mobility, broadband, and the cloud redefine “campus” in ways that call the entire model into question, and thereby may transform higher education. A series of challenges lies ahead on this path. If we tackle and overcome these challenges, higher education, perhaps even including its role in research, could change in very fundamental ways.

The first challenge, which is already being widely addressed in colleges, universities, and other entities, is distance education: how to deliver instruction and promote learning effectively at a distance. Some efforts to address this challenge involve extrapolating from current models (many community colleges, “laptop colleges”, and for-profit institutions are examples of this), some involve recycling existing materials (Open CourseWare, and to a large extent the Khan Academy), and some involve experimenting with radically different approaches such as game-based simulation. There has already been considerable success with effective distance education, and more seems likely in the near future.

As it becomes feasible to teach and learn at a distance, so that students can be “located” on several “campuses” at once, students will have no reason to take all their coursework from a single institution. A question arises: If coursework comes from different “campuses”, who defines curriculum? Standardizing curriculum, as is already done in some professional graduate programs, is one way to address this problem — that is, we may define curriculum extra-institutionally, “above the campus”. Such standardization requires cross-institutional collaboration, oversight from professional associations or guilds, and/or government regulation. None of this works very well today, in part because such standardization threatens institutional autonomy and distinctiveness. But effective distance teaching and learning may impel change.

As courses relate to curricula without depending on a particular institution, it becomes possible to imagine divorcing the offering of courses from the awarding of degrees. In this radical, no-longer-vertical future, some institutions might simply sell instruction and other learning resources, while others might concentrate on admitting students to candidacy, vetting their choices of and progress through coursework offered by other institutions, and awarding degrees. (Of course, some might try to continue both instructing and certifying.) To manage all this, it will clearly be necessary to gather, hold, and appraise student records in some shared or central fashion.

To the extent this projection is valid, not only does the role of IT within institutions change, but the very role of institutions in higher education changes. It remains important that local expertise be available to support the IT components of distinctive coursework, and of course to support research, but almost everything else — administrative and community services, infrastructure, general support — becomes either so standardized and/or outsourced as to require no institutional support, or becomes an activity for higher education generally rather than colleges or universities individually. In the extreme case, the typical institution really doesn’t need a central IT organization.

In this scenario, individual colleges and universities don’t need Brads.

“…What Should We Tell the Yugo-Slavian Police?”

Poirot’s second solution to the Ratchett murder (everyone including the butler did it) requires astonishing and improbable synchronicity among a large number of widely dispersed individuals. That’s fine for a mystery novel, but rarely works out in real life.

I therefore don’t suggest that the radical scenario I sketched above will come to pass. As many scholars of higher education have pointed out, colleges and universities are organized and designed to resist change. So long as society entrusts higher education to colleges and universities and other entities like them, we are likely to see evolutionary rather than radical change. So my extreme scenario, perhaps absurd on its face, seeks only to suggest that we would do well to think well beyond institutional boundaries as we promote IT in higher education and consider its transformative potential.

And more: if we’re serious about the potentially transformative role of mobility, broadband, and the cloud in higher education, we need to consider not only what IT might change but also what effects that change will have on IT itself — and especially on its role within colleges and universities and across higher education.

GoTo, Gas Pedals, & Google: What Students Should Know, and Why That’s Not What We Teach Them

In the 1980s I began teaching a course in BASIC programming in the Harvard University Extension, part of an evening Certificate of Advanced Study program for working students trying to get ahead. Much to my surprise, students immediately filled the small assigned lecture hall to overflowing, and nearly overwhelmed my lone teaching assistant.

Within two years, the course had grown to 250+ students. They spread throughout the second-largest room in the Harvard Science Center (Lecture Hall C – the one with the orange seats, for those of you who have been there). I now had a dozen TAs, so I was in effect not only teaching the BASIC course, but also leading a seminar on the pedagogical challenge of teaching non-technical students how to write structured programs in a language that heretically allowed “GoTo” statements.

Computer Literacy?

There’s nothing very interesting or exciting about learning to program in BASIC. Although I flatter myself a good teacher, even my best efforts to render the material engaging – for example, assignments that variously involved having students act out various roles in Stuart Madnick’s deceptively simple Little Man Computer system, automating Shirley Ellis‘s song The Name Game, and modeling a defined-benefit pension system – in no way explained the course’s popularity.
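As an aside, the Name Game assignment was essentially a small string-transformation exercise. A rough sketch of one plausible version – in Python rather than the course’s BASIC, and glossing over the song’s special rules for names that begin with B, F, or M – might be:

```python
def name_game(name):
    """Generate a verse of Shirley Ellis's "The Name Game" for a given name.

    A hypothetical reconstruction of the assignment, not the original BASIC.
    """
    vowels = "aeiou"
    lower = name.lower()
    # Strip leading consonants to find the rhyming stem ("Shirley" -> "irley").
    i = 0
    while i < len(lower) and lower[i] not in vowels:
        i += 1
    stem = lower[i:] if i < len(lower) else lower
    return (f"{name}, {name} bo-b{stem}, "
            f"bonana-fanna fo-f{stem}, "
            f"fee fi mo-m{stem}. {name}!")
```

Calling `name_game("Shirley")` yields the “Shirley bo-birley” verse quoted below.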

So what was going on? I asked students why they were taking my course. Most often, they said something about “computer literacy”. That’s a useful (if linguistically confused) term, but in this case a misleading one.

If the computer becomes important, the analogy seems to run, then the ability to use a computer becomes important, much as the spread of printed material made reading and writing important. So far so good. For the typical 1980s employee, however, using computers in business and education centered on applications like word processors, spreadsheets, charts, and maybe statistical packages. Except for those within the computer industry, it rarely involved writing code in high-level languages.

BASIC programming thus had little direct relevance to the “computer literacy” students actually needed. The print era made reading and writing important  for the average worker and citizen. But only printers needed adeptness with the technologies of paper, ink, composition (in the Linotype sense), and presses. That’s why the analogy fails: programming, by the 1980s, was about making computer applications, not using them. That’s the opposite of what students actually needed.

Yet clearly students viewed the ability to program in BASIC – even “Shirley Shirley bo-birley…” – as somehow relevant to the evolving challenges of their jobs. If BASIC programming wasn’t directly relevant to actual computer literacy, why did they believe this? Two explanations of its indirect importance suggest themselves:

  • Perhaps ability to program was an accessible indicator of more relevant yet harder-to-measure competence. Employers might have been using programming ability, however irrelevant directly, as a shortcut measure to evaluate and sort job applicants or promotion candidates. (This is essentially a recasting of Lester Thurow‘s “job queues” theory about the relationship between educational attainment and hiring, namely that educational attainment signals the ability to learn quickly rather than provides direct training.) Applicants or employees who believed this was happening would thus perceive programming ability as a way to make themselves appear attractive, even though the skill was actually irrelevant.
  • Perhaps students learned to program simply to gain confidence that they could cope with the computer age.

I propose a third explanation:

  • As technology evolves, generations that experience the evolution tend to believe it important for the next generation to understand what came before, and advise students accordingly.

That is, we who experience technological change believe that competence with current technology benefits from understanding prior technology – a technological variant of George Santayana’s aphorism “Those who cannot remember the past are condemned to repeat it” – and send myriad direct and indirect messages to our successors and students that without historical understanding one cannot be fully competent.

Shifting Gears

My father taught me to drive on the family’s 1955 Chevy station wagon, a six-cylinder car with a three-speed, non-synchromesh, stalk-mounted-shifter manual transmission and power nothing. After a few rough sessions learning to get the car moving without bucking and stalling, to turn and shift at the same time, and to double-clutch and downshift while going downhill, I became a pretty good driver.

But my father, who had learned to drive on a Model T Ford with a planetary transmission and separate throttle and spark-advance controls, remained skeptical of my ability. He was always convinced that since I didn’t understand that latter distinction, I really wasn’t operating the car as well as I might. (Today’s “accelerator”, if I understand it correctly, combines the two functions: it tells the engine to spin faster, which is what the spark-advance lever did, and then feeds it the necessary fuel mixture, which was the throttle’s function.)

Years later it came time for our son’s first driving lesson. We were in our automatic-transmission Toyota Camry, equipped with power steering and brakes, on a not-yet-opened Cape Cod subdivision’s newly paved streets. Apparently forgetting how irrelevant the throttle/spark distinction had been to my learning to drive, I delivered a lecture on what was going on in the automatic transmission – why it didn’t need a clutch, how it was deciding when to shift gears, and so forth. Our son listened patiently, and then rapidly learned to drive the Camry very well without any regard to what I’d explained. My lecture had absolutely no effect on his competence (at least not until several years later, I like to believe, when he taught himself to drive a friend’s four-on-the-floor VW).

Technological Instruction

Which brings me to the present, and the challenge of preparing today’s students for tomorrow’s technological workplaces. What should our advice to them be, whether explicit – in the form of direct instruction or requirements – or implicit, in the form of the kind of contextual guidance that induced so many students to take my BASIC course? In particular, how can we break away from the generational tendency to emphasize how we got here rather than where we’re going?

I don’t propose to answer that question fully here, but rather to sketch, through two examples, how a future-oriented perspective might differ from a generational one. The first example is cloud services, and the second is online information.

Cloud Services

I started writing this essay on my DC office computer. I’m typing these words on an old laptop I keep in my DC apartment, and I’ll probably finish it on my traveling computer or perhaps on my Chicago home computer. A big problem ensues: How do I keep these various copies synchronized? What I need is to have the same document up to date wherever I’m working. My answer is a service called Dropbox, which copies documents I save to its central servers and then disseminates them automatically to all my other computers and even my phone – that is, it synchronizes multiple copies of the same document across multiple devices.

Alternatively, I might have gotten what I need – the same document up to date wherever I’m working – by drafting this post as a Google or Live document. Rather than synchronizing local copies among my various computers, I’d have been editing the same remote document from each of them.
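To make the distinction concrete, here is a deliberately minimal sketch – hypothetical classes, not Dropbox’s or Google’s actual mechanisms – of the two models: per-device copies reconciled by a sync step, versus a single authoritative copy that every device edits directly.

```python
class SyncedDocument:
    """Dropbox-style: each device edits its own local copy; a sync step
    propagates the newest version to every other device."""

    def __init__(self, devices):
        # device -> (text, version); all copies start empty at version 0
        self.copies = {d: ("", 0) for d in devices}

    def edit(self, device, text):
        _, version = self.copies[device]
        self.copies[device] = (text, version + 1)

    def sync(self):
        # Simplest possible reconciliation: the highest-version copy wins.
        # (Real sync services must also handle conflicting edits.)
        newest = max(self.copies.values(), key=lambda copy: copy[1])
        self.copies = {d: newest for d in self.copies}


class RemoteDocument:
    """Google-Docs-style: one copy on the server; every device edits it."""

    def __init__(self):
        self.text = ""

    def edit(self, device, text):
        self.text = text  # which device edits is irrelevant to the stored state

    def read(self, device):
        return self.text  # every device always sees the current text
```

In the synced model, an edit made at the office is invisible on the laptop until the next sync; in the remote model there is nothing to synchronize, because there is only one document.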

My instincts are that this difference between synchronized and remote documents is important, something that I, as an educator, should be sure the next generation understands. When my son asks about how to work across different machines, my inclination is to explain the difference between the options, how one is giving way to the other, and so forth. Is that valid, or is this the same generational fallacy that led my father to explain throttles and spark advance or me to explain clutches and shifting?

Online Information

When I came to the history quote above, I couldn’t remember its precise wording or who wrote it. That’s what the Internet is for, right? Finding information?

I typed “those who ignore the past are doomed”, which was the partial phrase I remembered, into Google’s search box. Among the first page of hits, the first time I tried this, were links to four quotation and reference sites (Wikipedia among them) and one departmental web page. The first four pointed me to the correct quote, usually giving the specific source including edition and page. The last, from a departmental web page at Northern Kentucky University, blithely repeated the incorrect quote (but at least ascribed it to Santayana). One of the quotation sites pointed to an earlier, similar quote from Edmund Burke. The Wikipedia entry reminded me that the quote is often incorrectly ascribed to Plato.

I then typed the same search into Bing’s search box. Many links on its first page of results were the same as Google’s — Wikiquote among them — but there were more links to political comments (most of them embodying incorrect variations on the quote), and one link to a conspiracy-theorist page linking the Santayana quote to George Orwell’s “Who controls the past controls the future; who controls the present controls the past”.

It wasn’t hard for me to figure out which search results to heed and which to ignore. The ability to screen search results and then either to choose which to trust or to refine the search is central to success in today’s networked world. What’s the best way to inculcate that skill in those who will need it?

I’ve been working in IT since before the Digital Equipment Corporation‘s Altavista, in its original incarnation, became one of the first Web search engines. The methods different search services use to locate and rank information have always especially interested me. The early Altavista ranked pages based on how many times search words appeared in them – a method so obviously manipulable (especially by sneaking keywords into non-displayed parts of Web pages) that it rapidly gave way to more robust approaches. The links one gets from Google or Bing today come partly from some very sophisticated ranking said to be based partly on user behavior (such as whether a search seems to have succeeded) and partly on links among sites (this was Google’s original innovation, called PageRank) – but also, quite openly and separately, from advertisers paying to have their sites displayed when users search for particular terms.
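To make the contrast concrete, here is a toy sketch – illustrative only, nothing like either company’s production code – of the two ranking ideas: counting how often the query terms appear on a page, versus a simplified PageRank that rewards pages other pages link to.

```python
def keyword_score(page_text, query_terms):
    """Early-Altavista-style ranking: count occurrences of the query terms.

    Trivially manipulable -- stuffing a page with repeated keywords
    inflates its score, which is why this approach was abandoned.
    """
    words = page_text.lower().split()
    return sum(words.count(term.lower()) for term in query_terms)


def pagerank(links, damping=0.85, iterations=50):
    """Simplified PageRank. links maps each page to the pages it links to.

    A page's rank grows with the rank of the pages linking to it; this
    sketch ignores refinements such as redistributing the rank of pages
    with no outbound links.
    """
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new[target] += share
        rank = new
    return rank
```

With a tiny web in which pages `a` and `b` both link to `c`, the link-based score ranks `c` highest regardless of what words appear on it – exactly the shift in emphasis described above.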

Here again the generational issue arises. Obviously we want to teach future generations how to search effectively, and how to evaluate the quality and reliability of the information their searches yield. But do we do this by explaining the evolution of search and ranking algorithms – the generational approach based on the preceding paragraph – or by teaching more generally, as bibliographic instructors in libraries have long done, how to cross-reference, assess, and evaluate information whatever its form?

Understanding throttles and spark advance did not help me become a better driver, understanding BASIC probably didn’t help prepare my Harvard students for their future workplaces, and explaining diverse cloud mechanisms and search algorithms isn’t the best way for us to maximize our students’ technological competence. Much as I love explaining things, I think the essence of successful technological teaching is to focus on the future, on the application and consequences of technology rather than its origins.

That doesn’t mean we should dismiss the importance of history, but rather that history does not suffice as a basis for technological instruction. It’s easier to explain the past than to anticipate the future, but the latter, however risky and uncertain and detached from our personal histories, is our job.

The Era of Control: It’s Over

Remember Attack Plan R? That’s the one that enabled Brigadier General Ripper to bypass General Turgidson and the rest of the Air Force chain of command, sending his bombers on an unauthorized mission to attack the Soviet Union. As a result of Plan R and two missed communications – the Soviets’ failure to announce the Doomsday Device and an encrypted radio’s failure to recall Major Kong’s B-52 – General Ripper ended the world and protected our precious bodily fluids, Colonel Mandrake’s best efforts notwithstanding.

Fortunately, we’re talking about Sterling Hayden, George C. Scott, Slim Pickens, and Peter Sellers in Stanley Kubrick’s Dr. Strangelove, or How I Learned to Stop Worrying and Love the Bomb, and not an actual event. But it’s a classic illustration of why control of key technologies is important, and it’s the introduction to my theme for today: we’re losing control of key technologies, which is important and has implications for how campus IT leaders do their jobs. We can afford to lose control; we can’t afford to lose influence.

Here’s how IT was on most campuses until about ten years ago:

  • There were a bunch of central applications: a financial system, a student-registration system, some web servers, maybe an instructional-management system, and then some more basic central facilities such as email and shared file storage.
  • Those applications and services were all managed by college or university employees – usually in the central IT shop, but sometimes in academic or administrative units – and they ran on mainframes or servers owned and operated likewise.
  • Students, faculty, and staff used the central applications from desktop and laptop computers. Amazing as it seems today, students often obtained their computers from the campus computer reseller or bookstore, or at least followed the institution’s advice if they bought them elsewhere. Faculty and staff computers were typically purchased by the college or university and assigned to users. In all three cases, a great deal of the software on users’ computers was procured, configured, and/or installed by college or university staff.
  • Personal computers and workstations connected to central applications and services over the campus network. The campus network, mostly wired but increasingly wireless, was provided and managed by the central IT organization, or in some cases by academic units.
  • Campus telephony consisted primarily of office, dormitory, and “public” telephone sets connected to a campus telephone exchange, which was either operated by campus staff or operated by an outside vendor under contract to the campus.

Notice the dominant feature of that list: all the key elements of campus information technology were provided, controlled, or at least strongly guided by the college or university, either through its central IT organization or through academic or administrative units.

There were exceptions to campus control over information technology, to be sure. The World Wide Web was already a major source of academic information. Although some academic material on the web came from campus servers, most of it didn’t. There were also many services available over the web, but most campus use of these was for personal rather than campus purposes (banking, for example, and travel planning). Most campus libraries relied heavily on outside services, some accessible through the Web and some through other client/server mechanisms. And many researchers, especially in physics and other computationally-intensive disciplines, routinely used shared computational resources located in research centers or on other campuses. Regardless, a decade ago central or departmental campus IT organizations controlled most campus IT.

Now fast forward to the present, when things are quite different. The first difference is complexity. For example, I described my own IT environment in a post a few weeks ago:

I started typing this using Windows Live Writer (Microsoft) on the Windows 7 (Microsoft, but a different part) computer (Dell) that is connected to the network (Comcast, Motorola, Cisco) in my home office, and it’ll be stored on my network hard drive (Seagate) and eventually be posted using the blog service (WordPress) on the hosting service (HostMonster) where my website and blog reside. I’ll keep track of blog readers using an analytic service (Google), and when I’m traveling I might correct typos in the post using another blog editor (BlogPress) on my iPhone (Apple) communicating over cellular (AT&T) or WiFi (could be anyone) circuits. Much of today’s IT is like this: it involves user-owned technologies like computers and phones combined in complicated ways with cloud-based services from outside providers. We’re always partly in the cloud, and partly on the ground, partly in control and partly at the mercy of providers.

Mélanges like this are pretty much the norm these days. And not just when people are working from home as I often do: the same is increasingly true at the office, for faculty and staff, and in class and dorms, for students. It’s true even of the facilities and services we might still think of as institutional: administrative systems and core services, for example.

The mélanges are complex, as I already pointed out, but their complexity goes beyond technical integration. That’s the second difference between the past and the present: not only does IT involve a lot of interrelated yet distinct pieces, but the pieces typically come from outside providers – the cloud, in the current usage. Those outside providers operate outside campus control.

Group the items I listed in my ten-years-ago list above into four categories: servers and data centers, central services and applications, personal computers and workstations, and the networks that interconnect them. Now consider how IT is on today’s campus:

  • Instead of buying new servers and housing them in campus buildings, campuses increasingly create servers within virtualization environments, house those environments in off-campus facilities perhaps provided by commercial hosting firms, and use the hosting firm’s infrastructure rather than their own.
  • A rapidly growing fraction of central services is outsourced, which means not only that servers are off campus and proprietary, but that the applications and services are being administered by outside firms.
  • Although most campuses still operate wired and wireless networks, users increasingly reach “campus” services through smartphones, tablets, web-enabled televisions, and portable computers whose default connectivity is provided by telephone companies or by commercial network providers.
  • Those smartphones, tablets, and televisions, and even many of the portable computers, are chosen, procured, configured, and maintained by individual users, not by the institution.

This migration from institutional to external or personal control is what we mean by “cloud services”. That is, the key change is not the location of servers, the architecture of applications, the management of networks, or the procurement of personal computers. Rather, it is our willingness to give up authority over IT resources, to trade control for economy of scale and cost containment. It’s Attack Plan R.

The tradeoff has at least four important consequences. Three of these have received widespread attention, and I won’t delve into them here: ceding control to the cloud

  • can introduce very real security, privacy, and legal risks,
  • can undercut long-established mechanisms designed to produce effective user support at minimal cost, and
  • can restructure IT costs and funding mechanisms.

I want to focus here on the fourth consequence. Ceding control to the cloud necessarily entails a fundamental shift in the role of central and departmental IT organizations. This shift requires CIOs and IT staff to change their ways if they are to continue being effective – being Rippers, Kongs, or Turgidsons won’t work any more.

Most important, cloud-driven loss of campus control over the IT environment means that organizational and management models based on ownership, faith, authority, and hierarchy – however benevolent, inclusive, and open – will give way to models based on persuasion, negotiation, contracting, and assessment. It will become relatively less important for CIOs to be skilled at managing large organizations, and more important for them to know how to define, specify, and measure costs and results and to negotiate intramural agreements and extramural contracts consistently. Cost-effective use of cloud services also requires standardization, since nothing drives costs up and vendors away quite so quickly as idiosyncrasy. Migration to the cloud thus requires that CIOs understand emerging standards, especially for database schemas, security models, virtualization, system interfaces, and on and on. It requires that CIOs understand that strength lies in numbers – especially in campuses banding together to procure services rather than in departures from the norm, however innovative.

Almost as important, much of what campuses now achieve through regulation they will need to achieve through persuasion – policy will give way to pedagogy as the dominant mechanism for guiding users and units. I serve on the operations advisory committee for a federal agency. To ensure data and program integrity, the agency’s IT organization wanted to revise policies to regulate staff use of personal computers and small mobile devices to do their work from home or while traveling. It became rapidly clear, as the advisory committee reacted to this, that although it was perfectly possible to write policies governing the use of personal and mobile devices, most of those policies would be ignored unless the agency mounted a major educational campaign. Then it became clear that if there were a major educational campaign, this would minimize the need for policy changes. This is the kind of transition we can imagine in higher education: we need to help users to do the right thing, not just tell them.

On occasion I’ve described my core management approaches as “bribery” and “conspiracy”. This meant that as CIO my job was to make what was best for the university also be what was most appealing to individuals (that’s “bribery”), and that figuring out what was best for the university required discussion, collaboration, and agreement among a broad array of IT and non-IT campus leaders (“conspiracy”). As control gives way to the cloud for campus IT, these two approaches become equally relevant to vendor relations and inter-institutional joint action. We who provide information technology to higher education need to work together to ensure that as we lose control over IT resources we don’t also lose influence.