Posts Tagged ‘mobility’

The Importance of Being Enterprise

…as Oscar Wilde well might have titled an essay about campus-wide IT, had there been such a thing back then.

Enterprise IT accounts for the lion’s share of campus IT staffing, expenditure, and risk. Yet it receives curiously little attention in national discussion of IT’s strategic role in higher education. Perhaps that should change. Two questions arise:

  • What does “Enterprise” mean within higher-education IT?
  • Why might the importance of Enterprise IT evolve?

What does “Enterprise IT” mean?

Here are some higher-education spending data from the federal Integrated Postsecondary Education Data System (IPEDS), omitting hospitals, auxiliaries, and the like:

Broadly speaking, colleges and universities deploy resources for purposes that relate either to their substantive mission or to the instrumental infrastructure and administration that underpin it.

  • Substantive purposes and goals comprise some combination of education, research, and community service. These correspond to the bottom three categories in the IPEDS graph above. A few institutions—Rockefeller University, for example—focus predominantly on research. Most research universities pursue all three missions, most community colleges emphasize the first and third, and most liberal-arts colleges focus on the first.
  • Instrumental activities are those that equip, organize, and administer colleges and universities for optimal progress toward their mission—the top two categories in the IPEDS graph. In some cases, core activities advance institutional mission by providing a common infrastructure for mission-oriented work. In other cases, they do so by providing campus-wide or departmental staffing, management, and processes to expedite that work. In still other cases, they do so through collaboration with other institutions or by contracting for outside services.

Education, research, and community service all use IT substantively to some extent. This includes technologies that directly or indirectly serve teaching and learning, technologies that directly enable research, and technologies that provide information and services to outside communities—for example, classroom technologies, learning management systems, technologies tailored to specific research data collection or analysis, research data repositories, library systems, and so forth.

Instrumental functions rely much more heavily on IT. Administrative processes rely increasingly on IT-based automation, standardization, and outsourcing. Mission-oriented IT applications share core infrastructure, services, and support. Core IT includes infrastructure such as networks and data centers, storage and computational clouds, and desktop and mobile devices; administrative systems ranging from financial, HR, student-record, and other back office systems to learning-management and library systems; and communications, messaging, collaboration, and social-media systems.

In a sense, then, there are six technology domains within college and university IT:

  • the three substantive domains (education, research, and community service), and
  • the three instrumental domains (infrastructure, administration, and communications).

Especially in the instrumental domains, “IT” includes not only technology, but also the services, support, and staffing associated with it. Each domain therefore has technology, service, support, and strategic components.

Based on this, here is a working definition: in higher education,

“Enterprise” IT comprises the IT-related infrastructure, applications, services, and staff
whose primary institutional role is instrumental rather than substantive.

Exploring Enterprise IT, framed thus, entails focusing on technology, services, and support as they relate to campus IT infrastructure, administrative systems, and communications mechanisms, plus their strategic, management, and policy contexts.

Why Might the Importance of Enterprise IT Evolve?

Three reasons: magnitude, change, and overlap.

Magnitude

According to data from EDUCAUSE’s Core Data Service (CDS) and the federal Integrated Postsecondary Education Data System (IPEDS), the typical college or university spends just shy of 5% of its operating budget on IT. This varies a bit across institutional types:

We lack good data breaking down IT expenditures further. However, we do have CDS data on how IT staff are distributed across different IT functions. Here is a summary graph, combining education and research into “academic” (community service accounts for very little dedicated IT effort):

Thus my assertion above that Enterprise IT accounts for the lion’s share of IT staffing. Even if we omit the “Management” component, Enterprise IT comprises 60-70% of staffing if we include IT support, and almost half if we do not. The distribution is even more skewed for expenditure, since hardware, applications, services, and maintenance are disproportionately greater in Administration and Infrastructure.

Why, given the magnitude of Enterprise relative to other college and university IT, has it not been more prominent in strategic discussion? There are at least two explanations:

  • relatively slow change in Enterprise IT, at least compared to other IT domains (rapidly-changing domains rightly receive more attention than stable ones), and
  • overlap—if not competition—between higher-education and vendor initiatives in the Enterprise space.

Change

Enterprise IT is changing thematically, driven by mobility, cloud, and other fundamental changes in information technology. It also is changing specifically, as concrete challenges arise.

Consider, as one way to approach the former, these five thematic metamorphoses:

  • In systems and applications, maintenance is giving way to renewal. At one time colleges and universities developed their own administrative systems, equipped their own data centers, and deployed their own networks. In-house development has given way to outside products and services installed and managed on campus, and more recently to the same products and services delivered in or from the cloud.
  • In procurement and deployment, direct administration and operations are giving way to negotiation with outside providers and oversight of the resulting services. Whereas once IT staff needed intricate knowledge of how systems worked, today that can be less useful than effective negotiation, monitoring, and mediation.
  • In data stewardship and archiving, segregated data and systems are giving way to integrated warehouses and tools. Historical data used to remain within administrative systems. The cost of keeping them “live” became too high, and so they moved to cheaper, less flexible, and even more compartmentalized media. The plunging price of storage and the emergence of sophisticated data warehouses and business-intelligence systems reversed this. Over time, storage-based barriers to data integration have gradually fallen.
  • In management support, unidimensional reporting is giving way to multivariate analytics. Where once summary statistics emerged separately from different business domains, and drawing inferences about their interconnections required administrative experience and intuition, today connections can be made at the record level deep within integrated data warehouses. Speculating about relationships between trends is giving way to exploring the implications of documented correlations.
  • In user support, authority is giving way to persuasion. Where once users had to accept institutional choices if they wanted IT support, today they choose their own devices, expect campus IT organizations to support them, and bypass central systems if support is not forthcoming. To maintain the security and integrity of core systems, IT staff can no longer simply require that users behave appropriately; rather, they must persuade users to do so. This means that IT staff increasingly become advocates rather than controllers. The required skillsets, processes, and administrative structures have been changing accordingly.

Beyond these broad thematic changes, a fourfold confluence is about to accelerate change in Enterprise IT: major systems approaching end-of-life, the growing importance of analytics, extensive mobility supported by third parties, and the availability of affordable, capable cloud-based infrastructure, services, and applications.

Systems Approaching End-of-Life

In the mid-1990s, many colleges and universities invested heavily in administrative-systems suites, often (if inaccurately) called “Enterprise Resource Planning” systems or “ERP.” Here, again drawing on CDS, are implementation data on Student, Finance, and HR/Payroll systems for non-specialized colleges and universities:

The pattern of implementation varies slightly across institution types. Here, for example, are implementation dates for Finance systems across four broad college and university groups:

Although these systems have generally been updated regularly since they were implemented, they are approaching the end of their functional life. That is, although they technically can operate into the future, the functionality of turn-of-the-century administrative systems likely falls short of what institutions currently require. Such functional obsolescence typically happens after about 20 years.

The general point holds across higher education: A great many administrative systems will reach their 20-year anniversaries over the next several years.

Moreover, many commercial administrative-systems providers end support for older products, even if those products have been maintained and updated. This typically happens as new products with different functionality and/or architecture establish themselves in the market.

These two milestones—functional obsolescence and loss of vendor support—mean that many institutions will be considering restructuring or replacement of their core administrative systems over the next few years. This, in turn, means that administrative-systems stability will give way to 1990s-style uncertainty and change.

Growing Importance of Analytics

Partly as a result of mid-1990s systems replacements, institutions have accumulated extensive historical data from their operations. They have complemented and integrated these by implementing flexible data-warehousing and business-intelligence systems.

Over the past decade, the increasing availability of sophisticated data-mining tools has given new purpose to data warehouses and business-intelligence systems that until now have largely provided simple reports. This has laid the foundation for the explosive growth of analytic management approaches (if, for the present, more rhetorical than real) in colleges and universities, and in the state and federal agencies that fund and/or regulate them.

As analytics become prominent in areas ranging from administrative planning to student feedback, administrative systems need to become better integrated across organizational units and data sources. The resulting datasets need to become much more widely accessible while complying with privacy requirements. Neither of these is easy to achieve. Achieving them together is more difficult still.
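To make that tension concrete, here is a minimal sketch in Python of what record-level integration with pseudonymization can look like. Everything in it—the table names, fields, and hashing scheme—is hypothetical, invented for illustration rather than drawn from any particular campus system; the point is only the pattern: join records from separate systems on a common student identifier, then hide that identifier before the integrated dataset circulates.

```python
import hashlib
import sqlite3

# Hypothetical illustration: integrate registrar and LMS records at the
# record level, then pseudonymize student IDs before wider release.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE registrar (student_id TEXT, major TEXT, gpa REAL);
    CREATE TABLE lms_activity (student_id TEXT, course TEXT, logins INTEGER);
    INSERT INTO registrar VALUES ('S001', 'History', 3.4), ('S002', 'Biology', 2.9);
    INSERT INTO lms_activity VALUES ('S001', 'HIST101', 42), ('S002', 'BIO150', 7);
""")

SECRET_SALT = "known-only-to-the-data-steward"  # rotate and guard this value

def pseudonymize(student_id: str) -> str:
    """Replace a real identifier with a stable one-way pseudonym."""
    digest = hashlib.sha256((SECRET_SALT + student_id).encode())
    return digest.hexdigest()[:12]

# The analytic value comes from joining across organizational units...
rows = conn.execute("""
    SELECT r.student_id, r.major, r.gpa, a.course, a.logins
    FROM registrar AS r JOIN lms_activity AS a USING (student_id)
""").fetchall()

# ...but the released dataset carries only pseudonyms, not real IDs.
for student_id, major, gpa, course, logins in rows:
    print(pseudonymize(student_id), major, gpa, course, logins)
```

A real deployment would need far more—access controls, aggregation thresholds, FERPA review—but even this toy shows why the two goals pull against each other: the integration depends on exactly the record-level identifiers that privacy requires hiding.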

Mobility Supported by Third Parties

Until about five years ago campus communications—infrastructure and services both—were largely provided and controlled by institutions. This is no longer the case.

Much networking has moved from campus-provided wired and WiFi facilities to cellular and other connectivity provided by third parties, largely because those third parties also provide the mobile end-user devices students, faculty, and staff favor.

Separately, campus-provided email and collaboration systems have given way to “free” third-party email, productivity, and social-media services funded by advertising rather than institutional revenue. That mobile devices and their networking are largely outside campus control is triggering fundamental rethinking of instruction, assessment, identity, access, and security processes. This rethinking, in turn, is triggering re-engineering of core systems.

Affordable, Capable Cloud

Colleges and universities have long owned and managed IT themselves, based on two assumptions: that campus infrastructure needs are so idiosyncratic that they can only be satisfied internally, and that campuses are more sophisticated technologically than other organizations.

Both assumptions held well into the 1990s. That has changed. “Outside” technology has caught up to and surpassed campus technology, and campuses have gradually recognized and begun to avoid the costs of idiosyncrasy.

As a result, outside services ranging from commercially hosted applications to cloud infrastructure are rapidly supplanting campus-hosted services. This has profound implications for IT staffing—both levels and skillsets.

The upshot is that Enterprise, already the largest component of higher-education IT, is entering a period of dramatic change.

Beyond change in IT, the academy itself is evolving dramatically. For example, online enrollment is becoming increasingly common. As the Sloan Foundation reports, the fraction of students taking some or all of their coursework online is increasing steadily:

This has implications not only for pedagogy and learning environments, but also for the infrastructure and applications necessary to serve remote and mobile students.

Changes in the IT and academic enterprises are one reason Enterprise IT needs more attention. A second is the panoply of entities that try to influence Enterprise IT.

Overlap

One might expect colleges and universities to have relatively consistent requirements for administrative systems, and therefore that the market for those would consist largely of a few major widely-used products. The facts are otherwise. Here are data from the recent EDUCAUSE Center for Applied Research (ECAR) research report The 2011 Enterprise Application Market in Higher Education:

The closest we come to a compact market is for learning management systems, where 94% of installed systems come from the top 5 vendors. Even in this area, however, there are 24 vendors and open-source groups. At the other extreme is web content management, where 89 active companies and groups compete and the top providers account for just over a third of the market.

One way major vendors compete under circumstances like these is by seeking entrée into the informal networks through which institutions share information and experiences. They do this, in many cases, by inviting campus CIOs or administrative-systems heads to join advisory groups or participate in vendor-sponsored conferences.

That these groups are usually more about promoting product than seeking strategic or technical advice is clear. They are typically hosted and managed by corporate marketing groups, not technical groups. In some cases the advisory groups comprise only a few members, in some cases they are quite large, and in a few cases there are various advisory tiers. CIOs from large colleges and universities are often invited to various such groups. For the most part these groups have very little effect on vendor marketing, and even less on technical architecture and direction.

So why do CIOs attend corporate advisory board meetings? The value to CIOs, aside from getting to know marketing heads, is that these groups’ meetings provide a venue for engaging enterprise issues with peers. The problem is that the number of meetings and their oddly overlapping memberships lead to scattershot conversations inevitably colored by the hosts’ marketing goals and technical choices. It is neither efficient nor effective for higher education to let vendors control discussions of Enterprise IT.

Before corporate advisory bodies became so prevalent, there were groups within higher-education IT that focused on Enterprise IT and especially on administrative systems and network infrastructure. Starting with 1950s workshops on the use of punch cards in higher education, CUMREC hosted meetings and publications focused on the business use of information technology. CAUSE emerged from CUMREC in the late 1960s, and remained focused on administrative systems. EDUCOM came into existence in the mid-1960s, and its focus evolved to complement those of CAUSE and CUMREC by addressing joint procurement, networking, academic technologies, copyright, and in general taking a broad, inclusive approach to IT. Within EDUCOM, the Net@EDU initiative focused on networking much the way CUMREC focused on business systems.

As these various groups melded into a few larger entities, especially EDUCAUSE, Enterprise IT remained a focus, but it was only one of many. Especially as the Y2K challenge prompted increased attention to administrative systems and intensive communications demands prompted major investments in networking, the prominence of Enterprise IT issues in collective work diffused further. Internet2 became the focal point for networking engagements, and corporate advisory groups became the focal point for administrative-systems engagements. More recently, entities such as Gartner, the Chronicle of Higher Education, and edu1world have tried to become influential in the Enterprise IT space.

The results of the overlap among vendor groups and associations, unfortunately, are scattershot attention and dissipated energy in the higher-education Enterprise IT space. Neither serves higher education well. Overlap thus joins accelerated change as a major argument for refocusing and reenergizing Enterprise IT.

The Importance of Enterprise IT

Enterprise IT, through its emphasis on core institutional activities, is central to the success of higher education. Yet the community’s work in the domain has yet to coalesce into an effective whole. Perhaps this is because we have been extremely respectful of divergent traditions, communities, and past achievements.

We must not be disrespectful, but it is time to change this: to focus explicitly on what Enterprise IT needs in order to continue advancing higher education, to recognize its strategic importance, and to restore its prominence.

9/25/12 gj-a  

The Rock, and The Hard Place

Looking into the near-term future—say, between now and 2020—we in higher education have to address two big challenges, both involving IT. Neither admits easy progress. But if we don’t address them, we’ll find ourselves caught between a rock and a hard place.

  • The first challenge, the rock, is to deliver high-quality, effective e-learning and curriculum at scale. We know how to do part of that, but key pieces are missing, and it’s not clear how we will find them.
  • The second challenge, the hard place, is to recognize that enterprise cloud services and personal devices will make campus-based IT operations the last rather than the first resort. This means everything about our IT base, from infrastructure through support, will be changing just as we need to rely on it.

“But wait,” I can hear my generation of IT leaders (and maybe the next) say, “aren’t we already meeting those challenges?”

If we compare today’s e-learning and enterprise IT with that of the recent past, those leaders might rightly suggest, immense change is evident:

  • Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors—none were common even in 2000, and they’re commonplace today.
  • Commercial administrative systems, virtualized servers, corporate-style email, web front ends—ditto.

That’s progress and achievement we all recognize, applaud, and celebrate. But that progress and achievement overcame past challenges. We can’t rest on our laurels.

We’re not yet meeting the two broad future challenges, I believe, because in each case fundamental and hard-to-predict change lies ahead. The progress we’ve made so far, however progressive and effective, won’t steer us between the rock of e-learning and the hard place of enterprise IT.

The fundamental change that lies ahead for e-learning
is the transition from campus-based to distance education

Back in the 1990s, Cliff Adelman, then at the US Department of Education, did a pioneering study of student “swirl,” that is, students moving through several institutions, perhaps with work intervals along the way, before earning degrees.

“The proportion of undergraduate students attending more than one institution,” he wrote, “swelled from 40 percent to 54 percent … during the 1970s and 1980s, with even more dramatic increases in the proportion of students attending more than two institutions.” Adelman predicted that “…we will easily surpass a 60 percent multi-institutional attendance rate by the year 2000.”

Moving from campus to campus for classes is one step; taking classes at home is the next. And so distance education, long constrained by the slow pace and awkward pedagogy of correspondence courses, has come into its own. At first it was relegated to “nontraditional” or “experimental” institutions—Empire State College, Western Governors University, UNext/Cardean (a cautionary tale for another day), Kaplan. Then it went mainstream.

At first this didn’t work: fathom.com, for example, a collaboration among several first-tier research universities led by Columbia, found no market for its high-quality online offerings. (Its Executive Director has just written a thoughtful essay on MOOCs, drawing on her fathom.com experience.)

Today, though, a great many traditional colleges and universities successfully bring instruction and degree programs to distant students. Within the recent past these traditional institutions have expanded into non-degree efforts like OpenCourseWare and broadcast efforts like the MOOC-based Coursera and edX. In 2008, 3.7% of students took all their coursework through distance education, and 20.4% took at least one class that way.

Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors, the innovations that helped us overcome past challenges—little of that progress was designed for swirling students who do not set foot on campus.

We know how to deliver effective instruction to motivated students at a distance. But policy issues remain unresolved: we don’t yet know how to

  • confirm their identity,
  • assess their readiness,
  • guide their progress,
  • measure their achievement,
  • standardize course content,
  • construct and validate curriculum across diverse campuses, or
  • certify degree attainment

in this imminent world. Those aren’t just IT problems, of course. But solving them will almost certainly challenge IT.
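On the first of those problems, confirming identity, one plausible building block deserves a sketch. Federated-identity protocols such as SAML and OpenID Connect rest on signed, expiring assertions from a trusted issuer; the toy Python version below (hypothetical names, with a shared HMAC key standing in for real public-key infrastructure) shows the idea in miniature.

```python
import hashlib
import hmac
import time

# Toy sketch of a signed, expiring identity assertion -- the pattern
# underlying federated-identity protocols such as SAML and OpenID
# Connect, reduced here to a shared-key HMAC for illustration.
SHARED_KEY = b"issuer-and-verifier-share-this-key"  # real systems use PKI

def issue_assertion(student_id: str, ttl_seconds: int = 300) -> str:
    """The issuer (say, the home campus) signs who and until-when."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{student_id}|{expires}"
    signature = hmac.new(SHARED_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{signature}"

def verify_assertion(token: str) -> str | None:
    """The verifier (say, a testing service) checks signature and expiry."""
    student_id, expires, signature = token.rsplit("|", 2)
    expected = hmac.new(SHARED_KEY, f"{student_id}|{expires}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return None  # altered or forged
    if time.time() > int(expires):
        return None  # expired
    return student_id

token = issue_assertion("S001")
print(verify_assertion(token))               # -> S001 while the token is fresh
print(verify_assertion("S002" + token[4:]))  # -> None: tampering is detected
```

Of course, this addresses only the easier half of the problem: knowing that the person at the keyboard is the person the assertion names is harder still, which is why remote proctoring and similar measures remain areas of active experimentation.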

The fundamental change that lies ahead for enterprise technologies
is the transition from campus IT to cloud and personal IT

The locus of control over all three principal elements of campus IT—servers and services, networks, and end-user devices and applications—is shifting rapidly from the institution to customers and third parties.

As recently as ten years ago, most campus IT services, everything from administrative systems through messaging and telephone systems to research technologies, were provided by campus entities using campus-based facilities, sometimes centralized and sometimes not. The same was true for the wired and then wireless networks that provided access to services, and for the desktop and laptop computers faculty, students, and staff used.

Today shared services are migrating rapidly to servers and systems that reside physically and organizationally elsewhere—the “cloud”—and the same is happening for dedicated services such as research computing. It’s also happening for networks, as carrier-provided cellular technologies compete with campus-provided wired and WiFi networking, and for end-user devices, as highly mobile personal tablets and phones supplant desktop and laptop computers.

As I wrote in an earlier post about “Enterprise IT,” the scale of enterprise infrastructure and services within IT, and the shift in their locus of control, have major implications for IT and the organizations that have provided it. Campus IT organizations grew up around locally-designed services running on campus-owned equipment managed by internal staff. Organization, staffing, and even funding models followed accordingly. Even in academic computing and user support, “heavy metal” experience was valued highly. The shifting locus of control makes other skills at least as valuable: the ability to negotiate with suppliers, to engage effectively with customers (indeed, to think of them as “customers” rather than “users”), to manage spending and investments under constraint, and to explain.

To be sure, IT organizations still require highly skilled technical staff, for example to fine-tune high-performance computing and networking, to ensure that information is kept secure, to integrate systems efficiently, and to identify and authenticate individuals remotely. But these technologies differ greatly from traditional heavy metal, and so must enterprise IT.

The rock, IT, and the hard place

In the long run, it seems to me that the campus IT organization must evolve rapidly to center on seven core activities.

Two of those are substantive:

  • making sure that researchers have the technologies they need, and
  • making sure that teaching and learning benefit from the best thinking about IT applications and effectiveness.

Four others are more general:

  • negotiating and overseeing relationships with outside providers;
  • specifying or doing what is necessary for robust integration among outside and internal services;
  • striking the right personal/institutional balance between security and privacy for networks, systems, and data; and last but not least
  • providing support to customers (both individuals and partner entities).

The seventh core activity, which should diminish over time, is

  • operating and supporting legacy systems.

Creative, energetic, competent staff are the sine qua non for achieving that kind of forward-looking organization. It’s very hard to do good IT without good, dedicated people, and those are increasingly difficult to find and keep. Not least, this is because colleges and universities compete poorly with the stock options, pay, glitz, and technology the private sector can offer. Therein lies another challenge: promoting loyalty and high morale among staff who know they could be making more elsewhere.

To the extent the rock of e-learning and the hard place of enterprise IT frame our future, we not only need to rethink our organizations and what they do; we also need to rethink how we prepare, promote, and choose higher-education IT leaders on campus and elsewhere—the topic, fortuitously, of a recent ECAR report, and of widespread rethinking within EDUCAUSE.

We’ve been through this before, and risen to the challenge.

  • Starting around 1980, minicomputers and then personal computers brought IT out of the data center and into every corner of higher education, changing data center, IT organization, and campus in ways we could not even imagine.
  • Then in the 1990s campus, regional, and national networks connected everything, with similarly widespread consequences.

We can rise to the challenges again, too, but only if we understand their timing and the transformative implications.

Transforming Higher Education through Learning Technology: Millinocket?

Down East

Note to prospective readers: This post has evolved, through extensive revision and expansion and more careful citation, into a paper available at http://gjackson.us/it-he.pdf.

You might want to read that paper, which is much better and more complete, instead of this post — unless you like the pictures here, which for the moment aren’t in the paper. Even if you read this just to see the pictures, please go read the other.

“Which way to Millinocket?,” a traveler asks. “Well, you can go west to the next intersection…” the drawling down-east Mainer replies in the Dodge and Bryan story,

“…get onto the turnpike, go north through the toll gate at Augusta, ’til you come to that intersection…. well, no. You keep right on this tar road; it changes to dirt now and again. Just keep the river on your left. You’ll come to a crossroads and… let me see. Then again, you can take that scenic coastal route that the tourists use. And after you get to Bucksport… well, let me see now. Millinocket. Come to think of it, you can’t get there from here.”

PLATO and its programmed-instruction kin were supposed to transform higher education. So were the Apple II, and then the personal computer – PC and then Mac – and then the “3M” workstation (megapixel display, megabyte memory, megaflop speed) for which Project Athena was designed. So were simulated laboratories, so were BITNET and then the Internet, so were MUDs, so was Internet2, so was artificial intelligence, so was supercomputing.

Each of these most certainly has helped higher education grow, evolve, and gain efficiency and flexibility. But at its core, higher education remains very much unchanged. That may no longer suffice.

What about today’s technological changes and initiatives – social media, streaming video, multi-user virtual environments, mobile devices, the cloud? Are they to be evolutionary, or transformational? If higher education needs the latter, can we get there from here?

It’s important to start conversations about questions like these from a common understanding of the information technologies that currently play a role in higher education, of what that role is, and of how technologies and their roles are progressing. That’s what prompted these musings.

Information Technology

For the most part, “information technology” means a tripartite array of hardware and software:

  • end-user devices, which today range from large desktop workstations to small mobile phones, typically with some kind of display, some way to make choices and enter text, and various other capabilities variously enabled by hardware and software;
  • servers, which comprise not just racks of processors, storage, and other hardware but rather are aggregations of hardware, software, applications, and data that provide services to multiple users (when the aggregation is elsewhere, it’s often called “the cloud” today); and
  • networks, wireless or wired, which interlink local servers, remote server clouds, and end-user devices, and which typically comprise copper and glass cabling, routers and switches and optronics, and network operating system plus some authentication and logging capability.

Information technology tends to progress rapidly but unevenly, with progress or shortcomings in one domain driving or retarding progress in others.

Today, for example, the rapidly growing capability of small smartphones has taxed previously underused cellular networks. Earlier, excess capability in the wired Internet prompted innovation in major services like Google and YouTube. The success of Google and Amazon forced innovation in the design, management, and physical location of servers.

Perhaps the most striking aspects of technological progress have been its convergence and integration. Whereas once one could reasonably think separately about servers, networks, and end-user devices, today the three are not only tightly interconnected and interdependent, but increasingly their components are indistinguishable. Network switches are essentially servers, servers often comprise vast arrays of the same processors that drive end-user devices plus internal networks, and end-user devices readily tackle tasks – voice recognition, for example – that once required massive servers.

Access to Information Technology

Progress, convergence, and integration in information technology have driven dramatic and fundamental change in the information technologies faculty, students, colleges, and universities have. That progress is likely to continue.

Here, as a result, are some assumptions we can reasonably make today:

  • Households have some level of broadband access to the Internet, and at least one computer capable of using that broadband access to view and interact with Web pages, handle email and other messaging, listen to audio, and view videos of at least YouTube quality.
  • Teenagers and most adults have some kind of mobile phone, and that phone usually has the capability to handle routine Internet tasks like viewing Web pages and reading email.
  • Colleges and universities have building and campus networks operating at broadband speeds of at least 10Mb/sec, and most have wireless networks operating at 802.11b (11Mb/sec) or greater speed.
  • Server capacity has become quite inexpensive, largely because “cloud” providers have figured out how to gain and then sell economy of scale.
  • Everyone – or at least everyone between the ages of, say, 12 and 65 – has at least one authenticated online identity, including email and other online service accounts; Facebook, Twitter, Google, or other social-media accounts; online banking, financial, or credit-card access; or network credentials from a school, college or university, or employer.
  • Everyone knows how to search on the Internet for material using Google, Bing, or other search engines.
  • Most people have a digital camera, perhaps integrated into their phone and capable of both still photos and videos, and they know how to share their photos with others or offload them onto their computers or an online service.
  • Most college and university course materials are in electronic form, and so is a large fraction of library and reference material used by the typical student.
  • Most colleges and universities have readily available facilities for creating video from lectures and similarly didactic events, whether in classrooms or in other venues, and for streaming or otherwise making that video available online.

It’s striking how many of these assumptions were invalid even as recently as five years ago. Most of the assumptions were invalid a decade before that (and it’s sobering to remember that the “3M” workstation was a lofty goal as recently as 1980 and cost nearly $10,000 in the mid-1980s, yet today’s iPhone almost exceeds the 3M spec).

Looking a bit into the future, here are some further assumptions that probably will be safe:

  • Typical home networking and computers will have improved to the point they can handle streamed video and simple two-way video interactions (which means that at least one home computer will have an add-on or built-in camera).
  • Most people will know how to communicate with individuals or small groups online through synchronous social media or messaging environments, in many cases involving video.
  • Authentication and monitoring technologies will exist to enable colleges and universities to reasonably ensure that their testing and assessment of student progress is protected from fraud.
  • Pretty much everyone will have the devices and accounts necessary for ubiquitous connectivity with anybody else and to use services from almost any college, university, or other educational provider.

Technology, Teaching, and Learning

In colleges and universities, as in other organizations, information technology can promote progress by enabling administrative processes to become more efficient and by creating diverse, flexible pathways for communication and collaboration within and across different entities. That’s organizational technology, and although it’s very important, it affects higher education much the way it affects other organizations of comparable size.

Somewhat more distinctively, information technology can become learning technology, an integral part of the teaching and learning process. Learning technology sometimes replaces traditional pedagogies and learning environments, but more often it enhances and expands them.

The basic technology and middleware infrastructure necessary to enable colleges and universities to reach, teach, and assess students appears to exist already, or will before long. This brings us to the next question: What applications turn information technology into learning technology?

To answer this, it’s useful to think about four overlapping functions of learning technology.

Amplify and Extend Traditional Pedagogies, Mechanisms, and Resources

For example, by storing and distributing materials electronically, by enabling lectures and other events to be streamed or recorded, and by providing a medium for one-to-one or collective interactions among faculty and students, IT potentially expedites and extends traditional roles and transactions. Similarly, search engines and network-accessible library and reference materials vastly increase faculty and student access. The effect, although profound, nevertheless falls short of transformational. Chairs outside faculty doors give way to “learning management systems” like Blackboard or Sakai or Moodle, wearing one’s PJs to 8am lectures gives way to watching lectures from one’s room over breakfast, and library schools become information-science schools. But the enterprise remains recognizable. Even when these mechanisms go a step further, enabling true distance education whereby students never set foot on campus (in 2011, 3.7% of all students took all their coursework through distance education), the resulting services remain recognizable. Indeed, they are often simply extensions of existing institutions’ campus programs.

Make Educational Events and Materials Available Outside the Original Context

For example, the OpenCourseWare initiative (OCW) started as a publicly accessible repository of lecture notes, problem sets, and other material from MIT classes. It has since grown to include similar material from scores of other institutions worldwide. Similarly, the newer Khan Academy has collected a broad array of instructional videos on diverse topics, some from classes and some prepared especially for Khan, and made those available for anyone interested in learning the material. OCW, Khan, and initiatives like them provide instructional material in pure form, rather than as part of curricula or degree programs.

Enable Experience-Based Learning 

This most productively involves experience that might otherwise be unaffordable, dangerous, or infeasible. Simulated chemistry laboratories and factories were an early example – students could learn to synthesize acetylene by trial and error without blowing up the laboratory, or to fine-tune just-in-time production processes without bankrupting real manufacturers. As computers have become more powerful, so have simulations become more complex and realistic. As simulations have moved to cloud-based servers, multi-user virtual environments have emerged, which go beyond simulation to replicate complex environments. Experiences like these were impossible to provide before the advent of powerful, inexpensive server clouds, ubiquitous networking, and graphically capable end-user devices.

Replace the Didactic Classroom Experience

This is the most controversial application of learning technology – “Why do we need faculty to teach calculus on thousands of different campuses, when it can be taught online by a computer?” – but also one that drives most discussion of how technology might transform higher education. It has emerged especially for disciplines and topics where instructors convey what they know to students through classroom lectures, readings, and tutorials. PLATO (Programmed Logic for Automated Teaching Operations) emerged from the University of Illinois in the 1960s as the first major example of computers replacing teachers, and has been followed by myriad attempts, some more successful than others, to create technology-based teaching mechanisms that tailor their instruction to how quickly students master material. (PLATO’s other major innovation was partnership with a commercial vendor, the now defunct Control Data Corporation.)

Higher Education

We now come to the $64 question: what role might trends in higher-education learning technology play in the potential transformation of higher education?

The transformational goal for higher education is to carry out its social and economic roles with greater efficiency and within its resource constraints. Many believe that such transformation requires a very different structure for future higher education. What might that structure be, and what role might information technologies play in its development?

The fundamental purpose of higher education is to advance society, polity, and the economy by increasing the social, political, and economic skills and knowledge of students – what economists call “human capital”. At the postsecondary level, education potentially augments students’ human capital in four ways:

  • admission, which is to say declaring that a student has been chosen as somehow better qualified or more adaptable in some sense than other prospective students (this is part of Lester Thurow‘s “job queue” idea);
  • instruction, including core and disciplinary curricula, the essentially unidirectional transmission of concrete knowledge through lectures, readings, and the like, and also the explication and amplification of that knowledge through classroom, tutorial, and extracurricular guidance and discussion (this is what we often mean by the narrow term “teaching”);
  • certification, specifically the measuring of knowledge and skill through testing and other forms of assessment; and
  • socialization, specifically learning how to become an effective member of society independently of one’s origin family, through interaction with faculty and especially with other students.

Sometimes a student gets all four together. For example, MIT marked me even before I enrolled as someone likely to play a role in technology (admission), taught me a great deal about science and engineering generally, electrical engineering in particular, and their social and economic context (instruction), documented through grades based on exams, lab work, and classroom participation that I had mastered (or failed to master) what I’d been taught (certification), and immersed me in an environment wherein data-based argument and rhetoric guided and advanced organizational life, and thereby helped me understand how to work effectively within organizations, groups, and society (socialization).

Most students attend colleges whose admissions processes amount to open admission, or involve simple norms rather than competition. That is, anyone who meets certain standards, such as high-school completion with a given GPA or test score, is admitted. In 2010, almost half of all institutions reported having no admissions criteria, and barely 11% accepted fewer than 1/4 of their applicants. Moreover, most students do not live on campus — in 2007-08, only 14% of undergraduates lived in college-owned housing. This means that most of higher education has limited admission and socialization effects. Therefore, for the most part higher education affects human capital through instruction and certification.

Instruction is an especially fertile domain for technological progress. This is because three trends converge around it:

  • ubiquitous connectivity, especially from students’ homes;
  • the rapidly growing corpus of coursework offered online, either as formal credit-bearing classes or as freestanding materials from entities like OCW or Khan; and
  • (perhaps more speculatively) the growing willingness of institutions to grant credit and allow students to satisfy requirements through classes taken at other institutions or through some kind of testing or assessment.

Indeed, we can imagine a future where it becomes commonplace for students to satisfy one institution’s degree requirements with coursework from many other institutions. Further down this road, we can imagine there might be institutions that admit students, prescribe curriculum, certify progress, and grant degrees – but have no instructional faculty and do not offer courses. This, in turn, might spawn purely instructional institutions.

One problem with such a future is that socialization, a key function of higher education, gets lost. This points the way to one major technology challenge for the future: Developing online mechanisms, for students who are scattered across the nation or the world, that provide something akin to rich classroom and campus interaction. Such interaction is central to the success of, for example, elite liberal-arts colleges and major residential universities. Many advocates of distance education believe that social media such as Facebook groups can provide this socialization, but that potential has yet to be realized.

A second problem with such a future is that robust, flexible methods for assessing student learning at a distance remain either expensive or insufficient. For example, ProctorU and Kryterion are two of several commercial entities that provide remote exam proctoring, but they do so through somewhat intensive use of video observation, and that only works for rather traditional written exams. For another example, in the aftermath of 9/11 many universities figured out how to conduct doctoral thesis defenses using high-bandwidth videoconferencing facilities rather than flying in faculty from other institutions, but this simply reduced travel expense rather than changed the basic idea that several faculty members would examine one student at a time.

Millinocket

If learning technologies are to transform higher education, we must exploit opportunities and address problems. At the same time, transformed higher education cannot neglect important dimensions of human capital. In that respect, our goal should be not only to make higher education more efficient than it is today, but also better.

Drivers headed for Millinocket rarely pull over any more to ask directions of drawling downeasters. Instead, they rely on the geographic positioning and information systems built into their cars or phones or computers, which in turn rely on network connectivity to keep maps and traffic reports up to date. To be sure, reliance on GPS and GIS tends to insulate drivers from interaction with the diversity they pass along the road, much as Interstate highways standardized cross-country travel. So the gain from those applications is not without cost.

The same is true for learning technology: it will yield both gains and losses. Effective progress will result only if we explore and understand the technologies and their applications, decide how these relate to the structure and goals of higher education, identify obstacles and remedies, and figure out how to get there from here.

IT and Post-Institutional Higher Education: Will We Still Need Brad When He’s 54?

“There are two possible solutions,” Hercule Poirot says to the assembled suspects in Murder on the Orient Express (that’s p. 304 in the Kindle edition, but the 1974 movie starring Albert Finney is way better than the book, and it and the book are both much better than the abominable 2011 PBS version with David Suchet). “I shall put them both before you,” Poirot continues, “…to judge which solution is the right one.”

So it is for the future role, organization, and leadership of higher-education IT. There are two possible solutions. There’s a reasonably straightforward projection of how the role of IT in higher education will evolve into the mid-range future, but there’s also a more complicated one. The first assumes institutional continuity and evolutionary change. The second doesn’t.

IT Domains

How does IT serve higher education? Let me count the ways:

  1. Infrastructure for the transfer and storage of pedagogical, bibliographic, research, operational, and administrative information, in close synergy with other physical infrastructure such as plumbing, wiring, buildings, sensors, controls, roads, and vehicles. This includes not only hardware such as processors, storage, networking, and end-user devices, but also basic functionality such as database management and hosting (or virtualizing) servers.
  2. Administrative systems that manage, analyze, and display the information students, faculty, and staff need to manage their own work and that of their departments. This includes identity management, authentication, and other so-called “middleware” through which institutions define their communities.
  3. Pedagogical applications students and faculty need to enable teaching and learning, including tools for data analysis, bibliography, simulation, writing, multimedia, presentations, discussion, and guidance.
  4. Research tools faculty and students need to advance knowledge, including some tools that also serve pedagogy plus a broad array of devices and systems to measure, gather, simulate, manage, share, distill, analyze, display, and otherwise bring data to bear on scholarly questions.
  5. Community services to support interaction and collaboration, including systems for messaging, collaboration, broadcasting, and socialization both within campuses and across their boundaries.

“…A Suit of Wagon Lit Uniform…and a Pass Key…”

The straightforward projection, analogous to Poirot’s simpler solution (an unknown stranger committed the crime, and escaped undetected), stems from projections of how institutions themselves might address each of the IT domains as new services and devices become available, especially cloud-based services and consumer-based end-user devices. The core assumptions are that the important loci of decisions are intra-institutional, and that institutions make their own choices to maximize local benefit (or, in the economic terms I mentioned in an earlier post, to maximize their individual utility).

Most current thinking in this vein goes something like this:

  • We will outsource generic services, platforms, and storage, and perhaps
  • consolidate and standardize support for core applications and
  • leave users on their own insofar as commercial devices such as phones and tablets are concerned, but
  • we must for the foreseeable future continue to have administrative systems securely dedicated and configured for our unique institutional needs, and similarly
  • we must maintain control over our pedagogical applications and research tools since they help distinguish us from the competition.

Evolution based on this thinking entails dramatic shrinkage in data-center facilities, as virtualized servers housed in or provided by commercial or collective entities replace campus-based hosting of major systems. It entails several key administrative and community-service systems being replaced by standard commercial offerings — for example, the replacement of expense-reimbursement systems by commercial products such as Concur, of dedicated payroll systems by commercial services such as ADP, and of campus messaging, calendaring, and even document-management systems by more general services such as Google’s or Microsoft’s. Finally, thinking like this typically drives consolidation and standardization of user support, bringing departmental support entities into alignment if not under the authority of central IT, and standardizing requirements and services to reduce response times and staff costs.

How might higher-education IT evolve if this is how things go? In particular, what effects would it have on IT organization and leadership?

One clear consequence of such straightforward evolution is a continuing need for central guidance and management across essentially the current array of IT domains. As I tried to suggest in a recent article, the nature of that guidance and management would change, in that control would give way to collaboration and influence. But institutions would retain responsibility for IT functions, and it would remain important for major systems to be managed or procured centrally for the general good. Although the skills required of the “chief information officer” would be different, CIOs would still be necessary, and most cross-institutional efforts would be mediated through them. Many of those cross-institutional efforts would involve coordinated action of various kinds, ranging from similar approaches to vendors through collective procurement to joint development.

We’d still need Brads.

“Say What You Like, Trial by Jury is a Sound System…”

If we think about the future unconventionally (as Poirot does in his second solution — spoiler in the last section below!), a somewhat more radical, extra-institutional projection emerges. What if Accenture, McKinsey, and Bain are right, and IT contributes very little to the distinctiveness of institutions — in which case colleges and universities have no business doing IT idiosyncratically or even individually?

In that case,

  • we will outsource almost all IT infrastructure, applications, services, and support, either to collective enterprises or to commercial providers, and therefore
  • we will not need data centers or staff, including server administrators, programmers, and administrative-systems technical staff, so that
  • the role of institutional IT will be largely to provide only highly tailored support for research and instruction, which means that
  • in most cases there will be little to be gained from centralizing IT,
  • it will make sense for academic departments to do their own IT, and
  • we can rely on individual business units to negotiate appropriate administrative systems and services, and so
  • the balance will shift from centralized to decentralized IT organization and staffing.

What if we’re right that mobility, broadband, cloud services, and distance learning are maturing to the point where they can transform education, so that we have simultaneous and similarly radical change on the academic front?

Despite changes in technology and economics, and some organizational evolution, higher education remains largely hierarchical. Vertically-organized colleges and universities grant degrees based on curricula largely determined internally, curricula largely comprise courses offered by the institution, institutions hire their own faculty to teach their own courses, and students enroll as degree candidates in a particular institution to take the courses that institution offers and thereby earn degrees. As Jim March used to point out, higher education today (well, okay, twenty years ago, when I worked with him at Stanford) is pretty similar to its origins: groups sitting around on rocks talking about books they’ve read.

It’s never been that simple, of course. Most students take some of their coursework from other institutions, some transfer from one to another, and since the 1960s there have been examples of network-based teaching. But the model has been remarkably robust across time and borders. It depends critically on the metaphor of the “campus”, the idea that students will be in one place for their studies.

Mobility, broadband, and the cloud redefine “campus” in ways that call the entire model into question, and thereby may transform higher education. A series of challenges lies ahead on this path. If we tackle and overcome these challenges, higher education, perhaps even including its role in research, could change in very fundamental ways.

The first challenge, which is already being widely addressed in colleges, universities, and other entities, is distance education: how to deliver instruction and promote learning effectively at a distance. Some efforts to address this challenge involve extrapolating from current models (many community colleges, “laptop colleges”, and for-profit institutions are examples of this), some involve recycling existing materials (Open CourseWare, and to a large extent the Khan Academy), and some involve experimenting with radically different approaches such as game-based simulation. There has already been considerable success with effective distance education, and more seems likely in the near future.

As it becomes feasible to teach and learn at a distance, so that students can be “located” on several “campuses” at once, students will have no reason to take all their coursework from a single institution. A question arises: If coursework comes from different “campuses”, who defines curriculum? Standardizing curriculum, as is already done in some professional graduate programs, is one way to address this problem — that is, we may define curriculum extra-institutionally, “above the campus”. Such standardization requires cross-institutional collaboration, oversight from professional associations or guilds, and/or government regulation. None of this works very well today, in part because such standardization threatens institutional autonomy and distinctiveness. But effective distance teaching and learning may impel change.

As courses relate to curricula without depending on a particular institution, it becomes possible to imagine divorcing the offering of courses from the awarding of degrees. In this radical, no-longer-vertical future, some institutions might simply sell instruction and other learning resources, while others might concentrate on admitting students to candidacy, vetting their choices of and progress through coursework offered by other institutions, and awarding degrees. (Of course, some might try to continue both instructing and certifying.) To manage all this, it will clearly be necessary to gather, hold, and appraise student records in some shared or central fashion.
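
To give that last step a concrete shape: here is a purely illustrative sketch, in Python with invented names throughout, of what an “above the campus” student record might look like: coursework gathered from many instructing institutions into one shared store that a separate certifying institution can appraise against its own degree requirements.

    # Purely illustrative: a shared, "above the campus" student record.
    # One store gathers coursework from many instructing institutions;
    # a separate certifying institution appraises it for a degree.
    from dataclasses import dataclass, field

    @dataclass
    class CourseRecord:
        offered_by: str    # the institution that taught the course
        course_id: str
        credits: float
        grade: str

    @dataclass
    class SharedTranscript:
        student: str
        records: list = field(default_factory=list)

        def credits_earned(self) -> float:
            # Real appraisal logic would be far richer than this.
            return sum(r.credits for r in self.records if r.grade != "F")

    transcript = SharedTranscript("pat@example.edu")
    transcript.records += [
        CourseRecord("Alpha College", "CALC-101", 4.0, "A"),
        CourseRecord("Beta University", "HIST-210", 3.0, "B+"),
        CourseRecord("Gamma Online", "CS-150", 3.0, "A-"),
    ]

    # The certifying institution, which taught none of these courses,
    # appraises the shared record against its degree requirements.
    print(transcript.credits_earned())   # 10.0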

To the extent this projection is valid, not only does the role of IT within institutions change, but the very role of institutions in higher education changes. It remains important that local staff be available to support the IT components of distinctive coursework, and of course to support research, but almost everything else — administrative and community services, infrastructure, general support — becomes either so standardized and/or outsourced as to require no institutional support, or becomes an activity for higher education generally rather than for colleges or universities individually. In the extreme case, the typical institution really doesn’t need a central IT organization.

In this scenario, individual colleges and universities don’t need Brads.

“…What Should We Tell the Yugo-Slavian Police?”

Poirot’s second solution to the Ratchett murder (everyone including the butler did it) requires astonishing and improbable synchronicity among a large number of widely dispersed individuals. That’s fine for a mystery novel, but rarely works out in real life.

I therefore don’t suggest that the radical scenario I sketched above will come to pass. As many scholars of higher education have pointed out, colleges and universities are organized and designed to resist change. So long as society entrusts higher education to colleges and universities and other entities like them, we are likely to see evolutionary rather than radical change. So my extreme scenario, perhaps absurd on its face, seeks only to suggest that we would do well to think beyond institutional boundaries as we promote IT in higher education and consider its transformative potential.

And more: if we’re serious about the potentially transformative role of mobility, broadband, and the cloud in higher education, we need to consider not only what IT might change but also what effects that change will have on IT itself — and especially on its role within colleges and universities and across higher education.

Network Neutrality: Who’s Involved? What’s the Issue? Why Do We Give a Shortstop?

Who’s on First, Abbott and Costello’s classic routine, first reached the general public as part of the Kate Smith Radio Hour in 1938. It then appeared on almost every radio network at some time or another before reaching TV in the 1950s. (The routine’s authorship, as I’ve noted elsewhere, is more controversial than its broadcast history.) The routine can easily be found in many places on the Internet – as a script, as audio recordings, or as videos. Some of its widespread availability is from widely-used commercial services (such as YouTube), some is from organized groups of fans, and some is from individuals. The sources are distributed widely across the Internet (in the IP-address sense).

I can easily find and read, listen to, or watch Who’s on First pretty much regardless of my own network location. It’s there through the Internet2 connection in my office, through my AT&T mobile phone, through my Sprint mobile hotspot, through the Comcast connections where I live, and through my local coffeeshops’ wireless in DC and Chicago.

This, most of us believe, is how the Internet should work. Users and content providers pay for Internet connections, at rates ranging from the price of a cup of coffee to thousands of dollars, and connection speeds thus vary by price and location. One may need to pay providers for access, but the network itself transmits traffic the same way no matter where stuff comes from, where it’s going, or what its substantive content is. This, in a nutshell, is what “network neutrality” means.

Yet network neutrality remains controversial. That’s mostly for good, traditional political reasons. Attaining network neutrality involves difficult tradeoffs among the economics of network provision, the choices available to consumers, and the public interest.

Tradeoffs become important when they affect different actors differently. That’s certainly the case for network neutrality:

  • Network operators (large multifunction ones like AT&T and Comcast, large focused ones like Verizon and Sprint, small local ones like MetroPCS, and business-oriented ones like Level3) want the flexibility to invest and charge differently depending on who wants to transmit what to whom, since they believe this is the only way to properly invest for the future.
  • Some Internet content providers (some of which, like Comcast, are also network operators) want to know that what they pay for connectivity will depend only on the volume and technical features of their material, and not vary with its content, whereas others want the ability to buy better or higher-priority transmission for their content than competitors get — or perhaps to have those competitors blocked.
  • Internet users want access to the same material on the same terms regardless of who they are or where they are on the network.

Political perspectives on network neutrality thus vary depending on who is proposing what conditions for whose network.

But network neutrality is also controversial because it’s misunderstood. Many of those involved in the debate either don’t – or won’t – understand what it means for a public network to be neutral, or indeed what the difference is between a public and a private network. That’s as true in higher education as it is anywhere else. Before taking a position on network neutrality or whose job it is to deal with it, therefore, it’s important to define what we’re talking about. Let me try to do that.

All networks discriminate. Different kinds of network traffic can entail different technical requirements, and a network may treat different technical requirements differently. E-mail, for example, can easily be transmitted in bursts – it really doesn’t matter if there’s a fifty-millisecond delay between words – whereas video typically becomes jittery and unsatisfactory if the network stream isn’t steady. A network that can handle email may not be able to handle video. One-way transmission (for example, a video broadcast or downloading a photo) can require very different handling than a two-way transmission (such as a videoconference). Perhaps even more basic, networks properly discriminate between traffic that respects network protocols – the established rules of the road, if you will – and traffic that attempts to bypass rule-based network management.

Network neutrality does not preclude discrimination. Rather, as I wrote above, a network behaves neutrally if it avoids discriminating on the basis of (a) where transmission originates, (b) where transmission is destined, and (c) the content of the transmission. The first two elements of network neutrality are relatively straightforward, but the third is much more challenging. (Some people also confuse how fast their end-user connection is with how quickly material moves across the network – that is, someone paying for a 1-megabit connection considers the Internet non-neutral if they don’t get the same download speeds as someone paying for a 26-megabit connection – but that’s a separate issue largely unrelated to neutrality.) In particular, it can be difficult to distinguish between neutral discrimination based on technical requirements and non-neutral discrimination based on a transmission’s substance. In some cases the two are inextricably linked.
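
To make the distinction concrete, here is a minimal sketch, in toy Python with every name invented for illustration (no real router exposes an interface like this), of the line between neutral and non-neutral rules: a rule keyed to a traffic class (bursty email, jitter-sensitive video) discriminates only on technical requirements, while a rule keyed to source, destination, or content fails the neutrality test.

    # Toy model of network policy rules, illustrating the three
    # neutrality criteria above. All names here are hypothetical.
    from dataclasses import dataclass

    # "traffic_class" is a technical property (email tolerates bursts,
    # video needs a steady stream); source, destination, and content
    # are the dimensions a neutral network must not discriminate on.
    NEUTRAL_FIELDS = {"traffic_class"}

    @dataclass
    class Rule:
        match_field: str   # which attribute of a flow the rule examines
        match_value: str
        action: str        # e.g. "deprioritize", "queue_steady", "block"

    def is_neutral(rule: Rule) -> bool:
        return rule.match_field in NEUTRAL_FIELDS

    policy = [
        Rule("traffic_class", "email", "deprioritize"),  # bursty is fine
        Rule("traffic_class", "video", "queue_steady"),  # jitter-sensitive
        Rule("source", "hypothetical-video.example", "block"),
        Rule("content", "whos-on-first", "block"),
    ]

    for rule in policy:
        verdict = "neutral" if is_neutral(rule) else "NON-NEUTRAL"
        print(f"{rule.match_field}={rule.match_value} -> {rule.action}: {verdict}")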

Consider several ways network operators might discriminate with regard to Who’s on First.

  • Alpha Networks might decide that its network simply can’t handle video streaming, and therefore might configure its systems not to transmit video streams. If a user tries to watch a YouTube version of the routine, it won’t work if the transmission involves Alpha Networks. The user will still be able to read the script or listen to an audio recording of the routine (for example, any of those listed in the Media|Audio Clips section of http://www.abbottandcostello.net/). Although refusing to carry video is clearly discrimination, it’s not discrimination based on source, destination, or content. Alpha Networks therefore does not violate network neutrality.
  • Beta Networks might be willing to transmit video streams, but only from providers that pay it to do so. Say, purely hypothetically, that the Hulu service – jointly owned by NBC and Fox – were to pay Beta Networks to carry its video streams, which include an ad-supported version of Who’s on First. Say further that Google, whose YouTube streams include many Who’s on First examples, were to decline to pay. If Beta Networks transmitted Hulu’s versions but not Google’s, it would be discriminating on the basis of source – and probably acting non-neutrally.

What if Hulu and Google use slightly different video formats? Beta might claim that carrying Hulu’s traffic but not Google’s was merely technical discrimination, and therefore neutral. Google would probably disagree. Who resolves such controversies – market behavior, the courts, industry associations, the FCC – is one of the thorniest points in the national debate about network neutrality. Onward…

  • Gamma Networks might decide that Who’s on First ridicules and thus disparages St. Louis (many performances of the routine refer to “the St Louis team”, although others refer to the Yankees). To avoid offending customers, Gamma might refuse to transmit Who’s on First, in any form, to any user in Missouri. That would be discrimination on the basis of destination. Gamma would violate the neutrality principle.
  • Delta Networks, following Gamma’s lead, might decide that Who’s on First disparages not just St. Louis, but professional baseball in general. Since baseball is the national pastime, and perhaps worried about lawsuits, Delta Networks might decide that Who’s on First should not be transmitted at all, and therefore it might refuse to carry the routine in any form. That would be discrimination on the basis of content. Delta would be violating the neutrality principle.
  • Epsilon Networks, a competitor to Alpha, might realize that refusing to carry video disserves customers. But Epsilon faces the same financial challenges as Alpha. In particular, it can’t raise its general prices to cover the expense of transmitting video since it would then lose most of its customers (the ones who don’t care about video) to Alpha’s lesser but less expensive service. Rather than block video, Epsilon might decide to install equipment that will enable video as a specially provided service for customers who want it, and to charge those customers – but not its non-video customers – extra for the added capability. Whether an operator violates network neutrality by charging more for special network treatment of certain content – the usual term for this is “managed services” – is another one of the thorniest issues in the national debate.

As I hope these examples make clear, there are various kinds of network discrimination, and whether they violate network neutrality is sometimes straightforward and sometimes not.  Things become thornier still if networks are owned by content providers or vice versa – or, as is more typical, if there are corporate kinships between the two. Hulu, for example, is partly owned by NBC Universal, which is becoming part of Comcast. Can Comcast impose conditions on “outside” customers, such as Google’s YouTube, that it does not impose on its own corporate cousin?

Why do we give a shortstop (whose name, in case you didn’t read to the end of the Who’s on First script, is “darn”)? That is, why is network neutrality important to higher education? There are two principal reasons.

First, as mobility and blended learning (the combination of online and classroom education) become commonplace in higher education, it becomes very important that students be able to “attend” their college or university from venues beyond the traditional campus. To this end, colleges and universities must be able to provide education to their students and interconnect researchers over the Internet, constrained only by the capacity of the institution’s connection to the Internet, the technical characteristics of online educational materials and environments, and the capacity of students’ connections to the Internet.

Without network neutrality, achieving transparent educational transmission from campus to widely-distributed students could become very difficult. The quality of student experience could come to depend on the politics of the network path from campus to student. To address this, each college and university would need to negotiate transmission of its materials with every network operator along the path from campus to student. If some of those network operators negotiate exclusive agreements for certain services with commercial providers – or perhaps with other colleges or universities – it could become impossible to provide online education effectively.

Second, many colleges and universities operate extensive networks of their own, or together operate specialized inter-campus networks for education, research, administrative, and campus purposes. Network traffic inconsistent with or detrimental to these purposes is managed differently than traffic that serves them. It is important that colleges and universities retain the ability to manage their networks in support of their core purposes.

Networks that are operated by and for the use of particular organizations, like most college and university networks, are private networks. Private and public networks serve different purposes, and thus are managed based on different principles. The distinction is important because the national network-neutrality debate – including the recent FCC action, and its evolving judicial, legislative, and regulatory consequences – is about public networks.

Private networks serve private purposes, and therefore need not behave neutrally. They are managed to advance private goals. Public networks, on the other hand, serve the public interest, and so – network-neutrality advocates argue – should be managed in accordance with public policy and goals. Although this seems a clear distinction, it can become murky in practice.

For example, many colleges and universities provide some form of guest access to their campus wireless networks, which anyone physically on campus may use. Are guest networks like this public or private? What if they are simply restricted versions of the campus’s regular network? Fortunately for higher education, there is useful precedent on this point. The Communications Assistance for Law Enforcement Act (CALEA), which took effect in 1995, established principles under which most college and university campus networks are treated as private networks – even if they provide a limited set of services to campus visitors (the so-called “coffee shop” criterion).

Higher education needs neutrality on public networks because those networks are increasingly central to education and research. At the same time, higher education needs to manage campus networks and private networks that interconnect them in support of education and research, and for that reason it is important that there be appropriate policy differentiation between public and private networks.

Regardless, colleges and universities need to pay for their Internet connectivity, to negotiate in good faith with their Internet providers, and to collaborate effectively on the provision and management of campus and inter-campus networks. So long as colleges and universities act effectively and responsibly as network customers, they need assurance that their traffic will flow across the Internet without regard to its source, destination, or content.

And so we come to the central question: Assuming that higher education supports network neutrality for public networks, do we care how its principles – that public networks should be neutral, and that private ones should be manageable for private purposes – are promulgated, interpreted, and enforced? Since the principles are important to us, as I outlined above, we care that they be implemented effectively, robustly, and efficiently. Since the public/private distinction seems to be relatively uncontroversial and well understood, the core issue is whether and how to address network neutrality for public networks.

There appear to be four different ideas about how to implement network neutrality.

  1. A government agency with the appropriate scope, expertise, and authority could spell out the circumstances that would constitute network neutrality, and prescribe mechanisms for correcting circumstances that fell short of those. Within the US, this would need to be a federal agency, and the only one arguably up to the task is the Federal Communications Commission. The FCC has acted in this way, but there remain questions whether it has the appropriate authority to proceed as it has proposed.
  2. The Congress could enact laws detailing how public networks must operate to ensure network neutrality. In general, it has proven more effective for the Congress to specify a broad approach to a public-policy problem, and then to create and/or empower the appropriate government agency to figure out what guidelines, regulations, and redress mechanisms are best. Putting detail into legislation tends to enable all kinds of special negotiations and provisions, and the result is then quite hard to change.
  3. The networking industry could create an internal body to promote and enforce network neutrality, with appropriate powers to take action when its members fail to live up to neutrality principles. Voluntary self-regulatory entities like this have been successful in some contexts and not in others. Thus far, however, the networking industry is internally divided as to the wisdom of network neutrality, and without agreement on the principle it is hard to see how there could be agreement on self-regulation.
  4. Network neutrality could simply be left to the market. That is, if network neutrality is important to customers, they will buy services from neutral providers and avoid services from non-neutral providers. The problem here is that network neutrality must extend across diverse networks, and individual consumers – even if they are large organizations such as many colleges and universities – interact only with their own “last mile” provider.

Those of us in higher education who have been involved in the network-neutrality debates have come to believe that among these four approaches the first is most likely to yield success and most likely to evolve appropriately as networking and its applications evolve. This is especially true for wireless (that is, cellular) networking, where there remain legitimate questions about what level of service should be subject to neutrality principles, and what kinds of service might legitimately be considered managed, extra-cost services.

In theory, the national debate about network neutrality will unfold through four parallel processes. Two of these are already underway: the FCC has issued an order “to Preserve Internet Freedom and Openness”, and at least two network operators have filed lawsuits challenging the FCC’s authority to do that. So we already have agency and court involvement, and we can expect possible congressional actions and industry initiatives to round out the set.

One thing’s sure: This is going to become more complicated and confusing…

Lou: I get behind the plate to do some fancy catching. Tomorrow’s pitching on my team and a heavy hitter gets up. Now the heavy hitter bunts the ball. When he bunts the ball, me, being a good catcher, I’m gonna throw the guy out at first base. So I pick up the ball and throw it to who?

Bud: Now that’s the first thing you’ve said right.

Lou: I don’t even know what I’m talking about!

Parsing Mobility

Old news from the buzzword-bingo front: “Cloud” is giving way to “Mobility” as the word to work into every presentation.

To many people, “mobile computing” means a small device interacting with a massively interconnected set of cloud-based databases and computational engines. From that perspective, mobile computing isn’t an emerging technology in its own right, but rather a window into a maturing one, and so not very interesting technologically.

Delve deeper, and “mobile computing” becomes very interesting — not as a technology, but rather as an aggregation of several technologies whose evolutionary paths overlap in a particularly fertile way. Understanding the emergence of mobile technology thus requires parsing it into its components — and the same goes for guessing about the future.

In no particular order, what follow are the technologies I think underlie the current transformation of our lives through mobile computing, and how they’re likely to evolve. Please add to and/or comment on the list!

  • Pervasive, transparent wireless. What we need, and what’s likely to emerge, is a combination of technology and business practices that enable people simply to stop thinking about how their devices are connected. Right now connectivity might be by 802.11 WiFi, and the WiFi might use any of several authentication and security technologies, or it might be cellular, where how you connect depends on which carrier your device likes, or it might be one of the emerging non-cellular, non-WiFi technologies like WiMax. The key technology that has yet to emerge is a mechanism for reconciling and federating the diverse identities people already use to get wireless access.
  • Federated identity and attribution. I have somewhere north of 20 email addresses, plus almost as many phone numbers, some bank accounts, a wallet full of credit cards, and several membership IDs. Eventually there needs to be some way to communicate the relevant dimensions of these identity icons from mobile devices into the cloud or vice versa — and to do so without communicating the irrelevant dimensions or exposing us to identity thieves. Moreover, the sources that identify me may be different from the sources that identify others, and these different sources need some kind of interlinking trust chain. Without these kinds of federated, limited, focused mechanisms for sharing attributes, it will remain awkward to integrate mobile devices into the commercial fabric. (The first sketch after this list illustrates such focused attribute release.)
  • Haptic interfaces. The touch screen is rapidly complementing and in many cases supplanting the keyboard and pointing device on small devices, and it is beginning to do so on larger ones (eg, iPad, Kindle, etc). Touch isn’t the only haptic technology that’s emerging rapidly, though — there are also three-dimensional technologies like the field sensors used in Wii controllers. We’re going to see rapid progress here.
  • Solid-state storage. It’s interesting to remember that the original iPod actually had a spinning hard drive in it — that would be unthinkable in a small device today, where flash memory reigns supreme, and it’s becoming unthinkable in light laptops (eg, the small Dells like mine, and the new MacBook Air), and we’ll see that progression continue.
  • Low-power processors. Without these and the next item, devices really aren’t “mobile”; rather, they’re temporarily detachable. Getting processors to consume less energy and give off less heat is critical to both run time and miniaturization. We’ll see immense progress here, I think, and that will gradually erase the difference between the flexibility of “portable computers” and the long run-times of “mobile devices”.
  • Power. Mobile computing would be infeasible without compact lithium-ion batteries, but they’re only one step along a continuum that eventually yields some combination of wireless energy supplies (presumably solar, but some based on body heat and kinetics might re-emerge — if you’re as old as me, you’ll remember so-called “self-winding” watches, whose springs were wound by a delicately balanced internal swing arm that spun around as we walked). New battery types or capabilities are also possibilities.
  • Displays. We’re still pretty much confined to displays that require some kind of rigid (or rigidly supported) surface, be it a liquid-crystal mechanism of some kind (many laptop displays, plus Kindles and other “electronic ink” devices) or some kind of charged-glass mechanism (such as iPhones and iPads, also some laptops and plasma screens). Cheap little projectors are another technology that’s playing in this space, but they only work in the dark, and then not very well. Eventually someone is going to figure out how to produce displays that are flexible, even rollable or foldable, while still being capable of detailed rendering and some kind of haptic input.
  • Encryption. As outsiders seek access to individual data and communications and individuals become worried about that, we’ll see demand for simple yet secure encryption mechanisms. The problem is how to balance simplicity of use, security level, and recoverability. It’s easy to develop encryption that’s easy to use, but often that makes it less secure and/or makes it hard to recover data if one forgets one’s password. Solving either of the latter problems typically reduces simplicity, and so on around the circle. (The second sketch after this list illustrates this tradeoff.)
  • Sensors. Many mobile devices already have some kind of location sensor (GPS or cell-tower triangulation), often accompanied by a compass and an accelerometer. Pressure sensors, thermometers, magnetic-field detectors, bar-code readers, weather-radio receivers, speech parsers, fingerprint readers, and other sensors are also becoming common — and some of the applications they enable, like Shazam’s audio recognition, are quite astonishing. Gradually our mobile devices will need less and less information from us.
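
Two of the items above lend themselves to small illustrations. First, federated attribute release: a minimal sketch, in Python with wholly hypothetical names and a stand-in shared-secret signature (a real federation would use public-key signatures and a chain of trust between identity sources), of the “limited, focused” disclosure idea, in which the identity source signs and releases only the attributes a transaction actually needs.

    # Sketch of focused attribute release: the identity provider signs
    # only the attributes a relying party requests, withholding the
    # rest. All names and the shared-secret scheme are hypothetical.
    import hashlib
    import hmac
    import json

    IDP_SECRET = b"demo-only-secret"   # stands in for real signing keys

    FULL_IDENTITY = {
        "name": "Greg",
        "email": "greg@example.edu",
        "over_21": True,
        "card_token": "tok_4242",
    }

    def release(requested_attrs):
        """Release only the requested attributes, signed so the relying
        party can verify they came from the identity provider."""
        claims = {k: FULL_IDENTITY[k] for k in requested_attrs}
        payload = json.dumps(claims, sort_keys=True).encode()
        sig = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
        return {"claims": claims, "sig": sig}

    # A merchant needs proof of age and a payment token, nothing else.
    assertion = release(["over_21", "card_token"])
    print(assertion["claims"])   # no name or email disclosed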
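
Second, the encryption tradeoff. This sketch, which uses the third-party cryptography package’s Fernet recipe with illustrative parameters (the salt, iteration count, and passphrase are made up), shows why simplicity and recoverability pull against each other: deriving the key from a passphrase is easy for the user, but a forgotten passphrase makes the data permanently unreadable, and any escrowed recovery key is one more secret to protect.

    # Sketch of the simplicity/recoverability tension. Requires the
    # third-party "cryptography" package; the salt and iteration count
    # are illustrative, not recommendations.
    import base64
    import hashlib
    from cryptography.fernet import Fernet

    def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
        raw = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
        return base64.urlsafe_b64encode(raw)   # Fernet wants 32 url-safe bytes

    salt = b"per-device-salt"   # would be random and stored in practice
    key = key_from_passphrase("correct horse battery staple", salt)
    token = Fernet(key).encrypt(b"my research notes")

    # Decryption works only with the same passphrase; if the passphrase
    # is forgotten, the token is permanently opaque, unless a second,
    # escrowed recovery key was kept, which is itself one more thing
    # that can be stolen.
    print(Fernet(key).decrypt(token))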

The point is, the emergence of mobile technology isn’t unidimensional — it’s not Dick Tracy’s wrist radio becoming a wrist TV. Rather, it comprises the simultaneous emergence and confluence of several otherwise distinct technologies.

It’s inevitable that both the emergence and the confluence will continue in ways we can scarcely imagine. This is yet another example of the maxim we always need to remember: everything, even technological progress, is connected to everything else!