Posts Tagged ‘College’

Network Neutrality? Again? What’s Different?

The last time I wrote about network neutrality, higher education was deeply involved in the debate, especially through the Association of Research Libraries and EDUCAUSE, whose policy group I then headed. We supported a proposal by Julius Genachowski, then chairman of the Federal Communications Commission (FCC), to require public non-managed last-mile networks to transmit end-user Internet traffic neutrally.

We worried that otherwise those networks might favor commercial over intellectual content, and so make it difficult for off-campus students to access course, library, and other campus content, and for campus entities such as libraries to access content on other campuses or in central shared repositories. (The American Library Association had similar worries on behalf of public libraries and their patrons.) Almost as a footnote, we opposed so-called “paid prioritization”, an ill-defined concept, rarely implemented, but now reborn as “Internet fast lanes”.

Although courts overturned the FCC on neutrality, for the most part its key principle has held: traffic should flow across the Internet without regard for its source, its destination, or its content.

But the paid-prioritization footnote is pushing its way back into the main text. It’s doing so in a particularly arcane way, but one that may have serious implications for higher education. Understanding this requires some definitions. After addressing those (as Steve Worona points out, an excellent Wired article has even more on how the Internet, peering, and content delivery networks work), I’ll turn to current issues and higher education’s interests.

What Is Network Neutrality?

To be “neutral”, in the FCC’s earlier formulation, a network must transmit public Internet traffic equivalently without regard for its source, its destination, or its content. Public Internet traffic means traffic that involves externally accessible IP addresses. A network can discriminate on the basis of type–for example, treat streaming video differently from email. But a neutral network cannot discriminate on source, destination, or content within a given type of traffic. A network can treat special traffic such as cable TV programming or cable-based telephony–”managed services”, in the jargon–differently from regular public Internet traffic, although this is controversial since the border is murky. More controversial still, given current trends, is the exclusion of cellular wireless Internet traffic (but not WiFi) from neutrality requirements.
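To make the distinction concrete, here is a toy sketch–hypothetical Python, not any real router’s logic–of the difference between prioritizing by traffic type, which a neutral network may do, and prioritizing by source within a type, which it may not:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    source: str
    destination: str
    traffic_type: str  # e.g. "video", "email"

# Priority classes keyed on traffic type alone; lower number = higher priority.
TYPE_PRIORITY = {"video": 1, "email": 2}

def neutral_priority(pkt: Packet) -> int:
    # Source, destination, and content are deliberately ignored.
    return TYPE_PRIORITY.get(pkt.traffic_type, 3)

def non_neutral_priority(pkt: Packet, favored_sources: set) -> int:
    # Same type-based classes, but favored senders jump the queue.
    # This is exactly the discrimination a neutral network must not perform.
    base = TYPE_PRIORITY.get(pkt.traffic_type, 3)
    return base - 1 if pkt.source in favored_sources else base
```

Under the neutral policy, two video packets from different providers land in the same class; under the non-neutral one, whoever is on the favored list gets ahead.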

Pipes

The word “transmit” is important, because it’s different from “send” and “receive”. Users connect computers, servers, phones, television sets, and other devices to networks. They choose and pay for the capacity of their connection (the “pipe”, in the usual but imperfect plumbing analogy) to send and receive network traffic. Not all pipes are the same, and it’s perfectly acceptable for a network to provide lower-quality pipes–slower, for example–to end users who pay less, and to charge customers differently depending on where they are located. But a neutral network must provide the same quality of service to those who pay for the same size, quality, and location of “pipe”.

A user who is mostly going to send and receive small amounts of text (such as email) can get by with very modest and inexpensive capacity. One who is going to view video needs more capacity, one who is going to use two-way videoconferencing needs even more, and a commercial entity that is going to transmit multiple video streams to many customers needs lots. Sometimes the capacity of connections is fixed–one pays for a given capacity regardless of whether one uses it all–and sometimes their capacity and cost adjust dynamically with use. But in all cases one is merely paying for a connection to the network, not for how quickly traffic will get to or arrive from elsewhere. That last depends on how much someone is paying at the other end, and on how well the intervening networks interconnect. Whether one can pay for service quality other than the quality of one’s own connection is central to the current debate.
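The point that one pays only for one’s own connection, not for end-to-end delivery, has a simple arithmetic consequence: the rate at which traffic actually arrives is bounded by the narrowest pipe anywhere along the path. A minimal sketch, with made-up capacities:

```python
def end_to_end_throughput(link_capacities_mbps):
    """The achievable rate is limited by the slowest link on the path."""
    return min(link_capacities_mbps)

# Hypothetical path: provider's first-mile pipe, a backbone, a congested
# peering point, and the end user's last-mile pipe. Paying for a bigger
# first-mile pipe (the 1000) cannot raise the result above the 25 at the
# far end, or above whatever the peering point allows.
path_mbps = [1000.0, 400.0, 150.0, 25.0]
```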

Users

It’s also important to consider two different (although sometimes overlapping) kinds of users: “end users” and “providers”. In general, providers deliver services to end users, sometimes content (for example, Netflix, the New York Times, or Google Search), sometimes storage (OneDrive, Dropbox), sometimes communications (Gmail, Xfinity Connect), and sometimes combinations of these and other functionality (Office Online, Google Apps).

The key distinctions between providers and end users are scale and revenue flow. The typical provider serves thousands if not millions of end users; the typical end user uses more than a few but rarely more than a few hundred providers. End users provide revenue to providers, either directly or by being counted; providers receive revenue (or sometimes other value such as fame) from end users or advertisers, and use it to fund the services they provide.

Roles

Networks (and therefore network operators) can play different roles in transmission: “first mile”, “last mile”, “backbone”, and “peering”. Providers connect to first-mile networks. End users do the same to last-mile networks. (First-mile and last-mile networks are mirror images of each other, of course, and can swap roles, but there’s always one of each for any traffic.) Sometimes first-mile networks connect directly to last-mile networks, and sometimes they interconnect indirectly using backbones, which in turn can interconnect with other backbones. Peering is how first-mile, last-mile, and backbone networks interconnect.

To use another imperfect analogy, first-mile networks are on-ramps to backbone freeways, last-mile networks are off-ramps, and peering is where freeways interconnect. But here’s why the analogy is imperfect: sometimes providers connect directly to backbones, and sometimes first-mile and last-mile networks have their own direct peering interconnections, bypassing backbones. Sometimes, as the Wired article points out, providers pay last-mile networks to host their servers, and sometimes special content-distribution systems such as Akamai do roughly the same. Those imperfections account for much of the current controversy.

Consider how I connect the Mac on my desk in Comcast’s downtown office (where a few of us from NBCUniversal also work) to hostmonster.com, where this blog lives. I connect to the office wireless, which gives me a private (10.x.x.x) IP address. That goes to an internal (also private) router in Philadelphia, which then connects to Comcast’s public network. Comcast, as the company’s first-mile network, takes the traffic to Pennsylvania, then to Illinois, then back east to Virginia. There Comcast has a peering connection to Cogent, which is Hostmonster’s first-mile network provider. Cogent carries my traffic from Virginia to Illinois, Missouri, Colorado, and Utah, where Hostmonster is located and connects to Cogent.

If Comcast and Cogent did not have a direct connection, then my traffic would flow through a backbone such as Level3. If Hostmonster placed its servers in Comcast data centers, my traffic would be all-Comcast. As I’ll note repeatedly, this issue–how first-mile, last-mile, and backbone networks peer, and how content providers deal with this–is driving much of today’s network-neutrality debate. So is the increasing consolidation of the last-mile network business.
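The path choices just described reduce to a simple rule: use a direct peering connection if one exists, otherwise transit a backbone. A toy model (hypothetical function and network names, purely illustrative):

```python
def route(first_mile, last_mile, peerings, backbone="Level3"):
    """Return the sequence of networks the traffic traverses.

    peerings is a set of frozensets, each naming two networks
    that interconnect directly.
    """
    if frozenset({first_mile, last_mile}) in peerings:
        return [first_mile, last_mile]       # direct peering, no backbone
    return [first_mile, backbone, last_mile]  # transit via a backbone
```

With a Comcast–Cogent peering present, the backbone drops out of the path; remove that peering and the same traffic transits Level3 instead.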

Public/Private

“Public” networks are treated differently than “private” ones. Generally speaking, if a network is open to the general public, and charges them fees to use it, then it’s a public network. If access is mostly restricted to a defined, closed community and no use fees are charged, then it’s a private network. The distinction between public and private networks comes mostly from the Communications Assistance for Law Enforcement Act (CALEA), which took effect in 1995. CALEA required “telecommunications carriers” to assist police and other law enforcement, notably by enabling court-approved wiretaps.

Even for traditional telephones, it was not entirely clear which “telecommunications carriers” were covered–for example, what about campus-run internal telephone exchanges?–and as CALEA extended to the Internet the distinction became murkier. Eventually “open to the general public, and charges them fees” provided a practical distinction, useful beyond CALEA.

Most campus networks are private by this definition. So are my home network, the network here in the DC Comcast office, and the one in my local Starbucks. To take the roadway analogy a step further, home driveways, the extensive network of roads within gated residential communities (even a large one such as Kiawah Island), and roadways within large industrial facilities (such as US Steel’s Gary plant) are private. City streets, state highways, and Interstates are public. (Note that the meaning of “public network” in Windows, MacOS, or other security settings is different.)
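The working rule–open to the general public, and charging use fees–amounts to a two-question test. A sketch of that rule of thumb (a heuristic only, not a legal determination):

```python
def network_kind(open_to_general_public: bool, charges_use_fees: bool) -> str:
    """CALEA-derived rule of thumb: public only if open to all AND fee-charging."""
    if open_to_general_public and charges_use_fees:
        return "public"
    return "private"
```

By this test a campus network (closed community, no use fees) is private, and so, perhaps surprisingly, is free coffee-shop WiFi (open, but no fees), while residential cable Internet service is public.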

Neutrality

In practice, and in most of the public debate until recently, the term “network neutrality” has meant this: except in certain narrow cases (such as illegal uses), a neutral-network operator does not prioritize traffic over the last mile to or from an end user according to the source of the traffic, who the end user is, or the content of the traffic. Note the important qualification: “over the last mile”.

An end user with a smaller, cheaper connection will receive traffic more slowly than one who pays for a faster connection, and the same is true for providers sending traffic. The difference may be more pronounced for some types of traffic (such as video) than for others (email). Other than this, however, a neutral network treats all traffic the same. In particular, the network operator does not manipulate the traffic for its own purposes (such as degrading a competitor’s service), and does not treat end users or providers differently except to the extent they pay for the speed or other qualities of their own network connections.

“Public” networks often claim to be neutral, at least to some degree; “private” ones rarely do. Most legislative and regulatory efforts to promote network neutrality focus on public networks.

Enough definition. What does this all mean for higher education, and in particular how is that meaning different from what I wrote about back in 2011?

The Rebirth of Paid Prioritization

Where once the debate centered on last-mile neutrality for Internet traffic to and from end users, which is relatively straightforward and largely accepted, it has now expanded to include both Internet and “managed services” over the full path from provider to end user, which is much more complicated and ambiguous.

An early indicator was AT&T’s proposal to let providers subsidize the delivery of their traffic to AT&T cellular-network end users, specifically by allowing providers to pay the data costs associated with their services to end users. That is, providers would pay for how traffic was delivered and charged to end users. This differs fundamentally from the principle that the service end users receive depends only on what end users themselves pay for. Since cellular networks are not required to be neutral, AT&T’s proposal violated no law or regulation, but it nevertheless triggered opposition: it implied that AT&T’s customers would receive traffic (ads, downloads, or whatever) from some providers more advantageously–that is, more cheaply–than equivalent traffic from other providers. End users would have no say in this, other than to change carriers. Thus far AT&T’s proposal has attracted few providers, but this may be changing.

Then came the running battles between Netflix, a major provider, and last-mile providers such as Comcast and Verizon. Netflix argued that end users were receiving its traffic less expeditiously than other providers’ traffic, that this violated neutrality principles, and that last-mile providers were responsible for remedying this. The last-mile providers rejected this argument: in their view the problem arose because Netflix’s first-mile network (as it happens, Cogent, the same one Hostmonster uses) was unwilling to pay for peering connections capable of handling Netflix’s traffic (which can amount to more than a quarter of all Internet traffic some evenings). In the last-mile networks’ view, Netflix’s first-mile provider was responsible for fixing the problem at its (and therefore presumably Netflix’s) expense. The issue is, who pays to ensure sufficient peering capacity? Returning to the highway metaphor, who pays for sufficient interchange ramps between toll roads, especially when most truck traffic is in one direction?

In the event, Netflix gave in and arranged (and paid for) direct first-mile connections to Comcast, Verizon, and other last-mile providers. But Netflix continues to press its case, and its position has relevance for higher education.

Colleges and Universities

Colleges and universities have traditionally taken two positions on network neutrality. Representing end users, including their campus community and distant students served over the Internet, higher education has taken a strong position in support of the FCC’s network-neutrality proposals, and even urged that they be extended to cover cellular networks. As operators of networks funded and designed to support campuses’ instructional, research, and administrative functions, however, higher education also has taken the position that campus networks, like home, company, and other private networks, should continue to be exempted from network-neutrality provisions.

These remain valid positions for higher education to take in the current debate, and indeed the principles recently posted by EDUCAUSE and various other organizations do precisely that. But the emergence of concrete paid-prioritization services may require more nuanced positions and advocacy. This is partly because the FCC’s positions have shifted, and partly because the technology and the debate have evolved.

Why should colleges and universities care about this new network-neutrality battleground? Because in addition to representing end users and operating private networks, campuses are increasingly providing instruction to distant students over the Internet. Massive open online courses (MOOCs) and other distance-education services often involve streamed or two-way video. They therefore require high-quality end-to-end network connections.

In most cases, campus network traffic to distant students flows over the commercial Internet, rather than over Internet2 or regional research and education (R&E) networks. Whether it reaches students expeditiously depends not only on the campus’s first-mile connection (“first mile” rather than “last mile” because the campus is now a provider rather than simply representing end users), but also on how the campus’s Internet service provider connects to backbones and/or to students’ last-mile networks–and of course on whether distant students have paid for good enough connections. This is similar to Netflix’s situation.

Unlike Netflix, however, individual campuses probably cannot afford to pay for direct connections to all of their students’ last-mile networks, or to place servers in distant data centers. They thus depend on their first-mile networks’ willingness to peer effectively with backbone and last-mile networks. Yet campuses are rarely major customers of their ISPs, and therefore have little leverage to influence ISPs’ backbone and peering choices. Alternatively, campuses can in theory use their existing connections to R&E networks to deliver instruction. But this is only possible if those R&E networks peer directly and capably with key backbone and last-mile providers. R&E networks generally have not done this.

Here’s what this all means: Higher education needs to continue supporting its historical positions promoting last-mile neutrality and seeking private-network exemptions for campus networks. But colleges and universities also need to work together to make sure their instructional traffic will continue to reach distant students. One way to achieve this is by opposing paid prioritization, of course. But FCC and other regulations may permit limited paid prioritization, or technology may as usual stay one step ahead of regulation. Higher education must figure out the best ways to deal with that, and collaborate to put them into practice.

Notes From (or is it To?) the Dark Side

“Why are you at NBC?” people ask. “What are you doing over there?” too, and “Is it different on the dark side?” A year into the gig seems a good time to think about those. Especially that “dark side” metaphor. For example, which side is “dark”?

This is a longer-than-usual post. I’ll take up the questions in order: first Why, then What, then Different; use the links to skip ahead if you prefer.

Why are you at NBC?

This is the first time I’ve worked at a for-profit company since, let’s see, the summer of 1967: an MIT alumnus arranged an undergraduate summer job at Honeywell’s Mexico City facility. Part of that summer I learned a great deal about the configuration and construction of custom control panels, especially for big production lines. I think of this every time I see photos of big control panels, such as those at older nuclear plants—I recognize the switch types, those square toggle buttons that light up. (Another part of the summer, after the guy who hired me left and no one could figure out what I should do, I made a 43½-foot paper-clip chain.)

One nice Honeywell perk was an employee discount on a Pentax 35mm SLR with 40mm and 135mm lenses, which I still have in a box somewhere, and which still works when I replace the camera’s light-meter battery. (The Pentax brand belonged to Honeywell back then, not Ricoh.) Excellent camera, served me well for years, through two darkrooms and a lot of Tri-X film. I haven’t used it since I began taking digital photos, though.

I digress. Except, it strikes me, not really. One interesting thing about digital photos, especially if you store them online and make most of them publicly visible (like this one, taken on the rim of spectacular Bryce Canyon, from my Backdrops collection), is that sometimes the people who find your pictures download them and use them for their own purposes. My photos carry a Creative Commons license specifying that although they are my intellectual property, they can be used for nonprofit purposes so long as they are attributed to me (an option not available, apparently, if I post them on Facebook instead).

So long as those who use my photos comply with the CC license requirement, I don’t require that they tell me, although now and then they do. But if people want to use one of my photos commercially, they’re supposed to ask my permission, and I can ask for a use fee. No one has done that for me—I’m keeping the day job—but it’s happened for our son.

I hadn’t thought much about copyright, permissions, and licensing for personal photos (as opposed to archival, commercial, or institutional ones) back when I first began dealing with “takedown notices” sent to the University of Chicago under the Digital Millennium Copyright Act (DMCA). There didn’t seem to be much of a parallel between commercialized intellectual property, like the music tracks that accounted for most early DMCA notices, and my photos, which I was putting online mostly because it was fun to share them.

Neither did I think about either photos or music while serving on a faculty committee rewriting the University’s Statute 18, the provision governing patents in the University’s founding documents.

The issues for the committee were fundamentally two, both driven somewhat by the evolution of “textbooks”.

First, where is the line between faculty inventions, which belong to the University (or did at the time), and creations, which belong to creators—between patentable inventions and copyrightable creations, in other words? This was an issue because textbooks had always been treated as creations, but many textbooks had come to include software (back then, CDs tucked into the back cover), and software had always been treated as an invention.

Second, who owns intellectual property that grows out of the instructional process? Traditionally, the rights and revenues associated with textbooks, even textbooks based on University classes, belonged entirely to faculty members. But some faculty members were extrapolating this tradition to cover other class-based material, such as videos of lectures. They were personally selling those materials and the associated rights to outside entities, some of which were in effect competitors (in some cases, they were other universities!).

As you can see by reading the current Statute 18, the faculty committee really didn’t resolve any of this. Gradually, though, it came to be understood that textbooks, even textbooks including software, were still faculty intellectual property, whereas instructional material other than that explicitly included in traditional textbooks was the University’s to exploit, sell, or license.

With the latter well established, the University joined Fathom, one of the early efforts to commercialize online instructional material, and put together some excellent online materials. Unfortunately, Fathom, like its first-generation peers, failed to generate revenues exceeding its costs. Once it blew through its venture capital, which had mostly come from Columbia University, Fathom folded. (Poetic justice: so did one of the profit-making institutions whose use of University teaching materials prompted the Statute 18 review.)

Gradually this all got me interested in the thicket of issues surrounding campus online distribution and use of copyrighted materials and other intellectual property, and especially the messy question of how campuses should think about copyright infringement occurring within and distributed from their networks. The DMCA had established the dual principles that (a) network operators, including campuses, could be held liable for infringement by their network users, but (b) they could escape this liability (find “safe harbor”) by responding appropriately to complaints from copyright holders. Several of us research-university CIOs worked together to develop efficient mechanisms for handling and responding to DMCA notices, and to help the industry understand those and the limits on what they might expect campuses to do.

As one byproduct of that, I found myself testifying before a Congressional committee. As another, I found myself negotiating with the entertainment industry, under US Education Department auspices, to develop regulations implementing the so-called “peer to peer” provisions of the Higher Education Opportunity Act of 2008.

That was one of several threads that led to my joining EDUCAUSE in 2009. One of several initiatives in the Policy group was to build better, more open communications between higher education and the entertainment industry with regard to copyright infringement, DMCA, and the HEOA requirements.

I didn’t think at the time about how this might interact with EDUCAUSE’s then-parallel efforts to illuminate policy issues around online and nontraditional education, but there are important connections. Through massive open online courses (MOOCs) and other mechanisms, colleges and universities are using the Internet to reach distant students, first to build awareness (in which case it’s okay for what they provide to be freely available) but eventually to find new revenues, that is, to monetize their intellectual property (in which case it isn’t).

If online campus content is to be sold rather than given away, then campuses face the same issues as the entertainment industry: They must protect their content from those who would use it without permission, and take appropriate action to deter or address infringement.

Campuses are generally happy to make their research freely available (except perhaps for inventions), as UChicago’s Statute 18 makes clear, provided that researchers are properly credited. (I also served on UChicago’s faculty Intellectual Property Committee, which among other things adjudicated who-gets-credit conflicts among faculty and other researchers.) But instruction is another matter altogether. If campuses don’t take this seriously, I’m afraid, then as goes music, so goes online higher education.

Much as campus tumult and changes in the late Sixties led me to abandon engineering for policy analysis, and quantitative policy analysis led me into large-scale data analysis, and large-scale data analysis led me into IT, and IT led me back into policy analysis, intellectual-property issues led me to NBCUniversal.

I’d liked the people I met during the HEOA negotiations, and the company seemed seriously committed to rethinking its relationships with higher education. I thought it would be interesting, at this stage in my career, to do something very different in a different kind of place. Plus, less travel (see screwup #3 in my 2007 EDUCAUSE award address).

So here I am, with an office amidst lobbyists and others who focus on legislation and regulation, with a Peacock ID card that gets me into the Universal lot, WRC-TV, and 30 Rock (but not SNL), and with a 401(k) instead of a 403(b).

What are you doing over there?

NBCUniversal’s goals for higher education are relatively simple. First, it would like students to use legitimate sources to get online content more, and illegitimate “pirate” sources less. Second, it would like campuses to reduce the volume of infringing material made available from their networks to illegal downloaders worldwide.

My roles are also two. First, there’s eagerness among my colleagues (and their counterparts in other studios) to better understand higher education, and how campuses might think about issues and initiatives. Second, the company clearly wants to change its approach to higher education, but doesn’t know what approaches might make sense. Apparently I can help with both.

To lay the foundation for specific projects—five so far, which I’ll describe briefly below—I looked at data from DMCA takedown notices.

Curiously, it turned out, no one had done much to analyze detected infringement from campus networks (as measured by DMCA notices sent to them), or to delve into the ethical puzzle: Why do students behave one way with regard to misappropriating music, movies, and TV shows, and very different ways with regard to arguably similar options such as shoplifting or plagiarism? I’ve written about some of the underlying policy issues in Story of S, but here I decided to focus first on detected infringement.

It turns out that virtually all takedown notices for music are sent by the Recording Industry Association of America, RIAA (the Zappa Trust and various other entities send some, but they’re a drop in the bucket).

Most takedown notices for movies and some for TV are sent by the Motion Picture Association of America, MPAA, on behalf of major studios (again, with some smaller entities such as Lucasfilm wading in separately). NBCUniversal and Fox send out notices involving their movies and TV shows.

I’ve now analyzed data from the major senders for both a twelve-month period (Nov 2011-Oct 2012) and a more recent two-month period (Feb-Mar 2013). For the more recent period, I obtained very detailed data on each of 40,000 or so notices sent to campuses. Here are some observations from the data:

  • Almost all the notices went to 4-year campuses that have at least 100 dormitory beds (according to IPEDS). To a modest extent, the bigger the campus the more notices, but the correlation isn’t especially large.
  • Over half of all campuses—even of campuses with dorms—didn’t get any notices. To some extent this is because there are lots and lots of very small campuses, and they fly under the infringement-detection radar. But I’ve learned from talking to a fair number of campuses that, much to my surprise, many heavily filter or even block peer-to-peer traffic at their commodity Internet border firewall—usually because the commodity bandwidth p2p uses is expensive, especially for movies, rather than to deal with infringement per se. Outsourced dorm networks also have an effect, but I don’t think they’re sufficiently widespread yet to explain the data.
  • Several campuses have out-of-date or incorrect “DMCA agent” addresses registered at the Library of Congress. Compounding that, it turns out some notice senders use “abuse” or other standard DNS addresses rather than the registered agent addresses.
  • Among campuses that received notices, a few campuses stand out for receiving the lion’s share, even adjusting for their enrollment. For example, the top 100 or so recipient campuses got about three quarters of the total, and a handful of campuses stand out sharply even within that group: the top three campuses (the leftmost blue bars in the graph below) accounted for well over 10% of the notices. (I found the same skewness in the 2012 study.) With a few interesting exceptions (interesting because I know or suspect what changed), the high-notice groups have been the same for the two periods.
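The concentration pattern in those observations–most notices going to a small fraction of campuses–is easy to quantify from per-campus counts. A sketch with made-up numbers (the actual counts are not reproduced here):

```python
def share_of_top(counts, k):
    """Fraction of all notices received by the k largest recipients."""
    ranked = sorted(counts, reverse=True)
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# Illustrative distribution: a few heavy recipients and a long tail,
# including the many campuses that received no notices at all.
campus_counts = [1200, 900, 800, 150, 90, 40, 10, 5, 0, 0, 0, 0]
```

With a distribution like this, the top three recipients account for over 90% of all notices; that is the kind of skew the observations above describe.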

The detection process, in general, is that copyright holders choose a list of music, movie, or TV titles they believe likely to be infringed. Their contractors then use BitTorrent tracker sites and other user tools to find illicit sources for those titles.

For the most part the studios and associations simply look for titles that are currently popular in theaters or from legitimate sources. It’s hard to see that process introducing a bias that would affect some campuses so much differently than others. I’ve also spent considerable time looking at how a couple of contractors verify that titles being offered illicitly (that is, listed for download on a BitTorrent tracker site such as The Pirate Bay) are actually the titles being supplied (rather than, say, malware, advertising, or porn), and at how they figure out where to send the resulting takedown notices. That process too seems pretty straightforward and unbiased.

Sender choices clearly can influence how notice counts vary from time to time: for example, adding a newly popular title to the search list can lead to a jump in detections and hence notices. But it’s hard to see how the choice of titles would influence how notice counts vary from institution to institution.

This all leads me to believe that takedown notices tell us something incomplete but useful about campus policies and practices, especially at the extremes. The analysis led directly to two projects focused on specific groups of campuses, and indirectly to three others.

Role Model Campuses

Based on the results of the data analysis, I communicated individually with CIOs at 22 campuses that received some but relatively few notices: specifically, campuses that (a) received at least one notice (and so are on the radar) but (b) fewer than 300 and fewer than 20 per thousand student headcount, (c) have at least 7,500 headcount students, and (d) have at least 10,000 dorm beds (per IPEDS) or sufficient dorm beds to house half their headcount. (These are Group 4, the purple bars in the graph below. The solid bars represent total notices sent, and the hollow bars represent incidence, or notices per thousand headcount students. Click on the graph to see it larger.)

I’ve asked each of those campuses whether they’d be willing to document their practices in an open “role models” database developed jointly by the campuses and hosted by a third party such as a higher-education association (as EDUCAUSE did after the HEOA regulations took effect). The idea is to make a collection of diverse effective practices available to other campuses that might want to enhance their practices.

High Volume Campuses

Separately, I communicated privately with CIOs at 13 campuses that received exceptionally many notices, even adjusting for their enrollment (Group 1, the blue bars in the graph). I’ve looked in some detail at the data for those campuses, some large and some small, and in some cases that’s led to suggestions.

For example, in a few cases I discovered that virtually all of a high-volume campus’s notices were split evenly among a small number of consecutive IP addresses. In those cases, I’ve suggested that those IP addresses might be the front-end to something like a campus wireless network. Filtering or blocking p2p (or just BitTorrent) traffic on those few IP addresses (or the associated network devices) might well shrink the campus’s role as a distributor without affecting legitimate p2p or BitTorrent users (who tend to be managing servers with static addresses).
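That diagnostic can be sketched in a few lines: tally notices per IP address and check whether a short run of consecutive addresses accounts for nearly all of them. The notice data below is invented for illustration, and the 90% threshold is an assumption:

```python
# Hypothetical sketch: flag a campus whose takedown notices cluster on a few
# consecutive IP addresses (e.g., the NAT front end of a wireless network).
import ipaddress
from collections import Counter

notice_ips = (
    ["192.0.2.10"] * 95 + ["192.0.2.11"] * 98 + ["192.0.2.12"] * 97
    + ["192.0.2.200", "192.0.2.77"]  # a little background noise
)

# Convert addresses to integers so "consecutive" is a simple range check.
counts = Counter(int(ipaddress.ip_address(ip)) for ip in notice_ips)
addrs = sorted(counts)

def top_consecutive_share(counts, addrs, width=4):
    """Largest share of notices attributable to `width` consecutive addresses."""
    best = 0
    for start in addrs:
        run = sum(counts.get(a, 0) for a in range(start, start + width))
        best = max(best, run)
    return best / sum(counts.values())

share = top_consecutive_share(counts, addrs)
if share > 0.9:
    print(f"{share:.0%} of notices come from a few consecutive addresses")
```

In a real analysis the threshold and window width would need tuning, but the shape of the check is the same: a near-even split across a handful of adjacent addresses is the signature of a shared front end rather than many independent hosts.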

Symposia

Back when I was at EDUCAUSE, we worked with NBCUniversal to host a DC meeting between senior campus staff from a score of campuses nationwide and some industry staff closely involved with the detection and notification for online infringement. The meeting was energetic and frank, and participants from both sides went away with a better sense of the other’s bona fides and seriousness. This was the first time campus staff had gotten a close look at the takedown-notice process since a Common Solutions Group meeting in Ann Arbor some years earlier; back then the industry’s practices were much less refined.

Based on the NBCUniversal/EDUCAUSE experience, we’re organizing a series of regional “Symposia” along these lines on campuses in various cities across the US. The objectives are to open new lines of communication and to build trust. The invitees are IT and student-affairs staff from local campuses, plus several representatives from industry, especially the groups that actually search for infringement on the Internet. The first was in New York, the second in Minneapolis, the third will be in Philadelphia, and others will follow in the West, the South, and elsewhere in the Midwest.

Research

We’re funding a study within a major state university system to gather two kinds of data. Initially the researchers are asking each campus to describe the measures it takes to “effectively combat” copyright infringement: its communications with students, its policies for dealing with violations, and the technologies it uses. The data from the first phase will help enhance a matrix we’ve drafted outlining the different approaches taken by different campuses, complementing what will emerge from the “role models” project.

Based on the initial data, the researchers and NBCUniversal will choose two campuses to participate in the pilot phase of the Campus Online Entertainment Initiative (which I’ll describe next). In advance of that pilot, the researchers will gather data from a sample of students on each campus, asking about their attitudes toward and use of illicit and legitimate online sources for music, movies, and video. They’ll then repeat that data collection after the pilot term.

Campus Online Entertainment Initiative

Last but least in neither ambition nor complexity, we’re crafting a program that will attempt to address both goals I listed earlier: encouraging campuses to take effective steps to reduce distribution of infringing material from their networks, and helping students to appreciate (and eventually prefer) legitimate sources for online entertainment.

Working with Universal Studios and some of its peers, we’ll encourage students on participating campuses to use legitimate sources by making a wealth of material available coherently and attractively—through a single source that works across diverse devices, and at a substantial discount or with similar incentives.

Participating campuses, in turn, will maintain or implement policies and practices likely to shrink the volume of infringing material available from their networks. In some cases the participating campuses will already be like those in the “role models” group; in others they’ll be “high volume” or other campuses willing to adopt more effective practices.

I’m managing these projects from NBCUniversal’s Washington offices, but with substantial collaboration from company colleagues here, in Los Angeles, and in New York; from Comcast colleagues in Philadelphia; and from people in other companies. Interestingly, and to my surprise, pulling this all together has been much like managing projects at a research university. That’s a good segue to the next question.

Is it different on the dark side?

Newly hired, I go out to WRC, the local NBC affiliate in Washington, to get my NBCUniversal ID and to go through HR orientation. Initially it’s all familiar: the same ID photo technology, the same RFID keycard, the same ugly tile and paint on the hallways, the same tax forms to be completed by hand.

But wait: Employee Relations is next door to the (now defunct) Chris Matthews Show. And the benefits part of orientation is a video hosted by Jimmy Fallon and Brian Williams. And there’s the possibility of something called a “bonus”, whatever that is.

Around my new office, in a spiffy modern building at 300 New Jersey Avenue, everyone seems to have two screens. That’s just as it was in higher-education IT. But wait: here one of them is a TV. People watch TV all day as they work.

Toto, we’re not in higher education any more.

It’s different over here, and not just because there’s a beautiful view of the Capitol from our conference rooms. Certain organizational functions seem to work better, perhaps because they should and in the corporate environment can be implemented by decree: HR processes, a good unified travel arrangement and expense system, catering, office management. Others don’t: there’s something slightly out of date about the office IT, especially the central/individual balance and security, and there’s an awful lot of paper.

Some things are just different, rather than better or not: the culture is heavily oriented to face-to-face and telephone interaction, even though it’s a widely distributed organization where most people are at their desks most of the time. There’s remarkably little email, and surprisingly little use of workstation-based videoconferencing. People dress a bit differently (a maitre d’ told me, “that’s not a Washington tie”).

But differences notwithstanding, mostly things feel much the same as they did at EDUCAUSE, UChicago, and MIT.

Where I work is generally happy: people talk to one another, gossip a bit, have pizza on Thursdays, complain about the quality of coffee, and are in and out a lot. It’s not an operational group, and so there’s not the bustle that comes with that, but it’s definitely busy (especially with everyone around me working on the Comcast/Time Warner merger). The place is teamly, in that people work with one another based on what’s right substantively, and rarely appeal to authority to reach decisions. Who trusts whom seems at least as important as who outranks whom, or whose boss is more powerful. Conversely, it’s often hard to figure out exactly how to get something done, and lots of effort goes into following interpersonal networks. That’s all very familiar.

I’d never realized how much like a research university a modern corporation can be. Where I work is NBCUniversal, which is the overarching corporate umbrella (“Old Main”, “Mass Hall”, “Building 10”, “California Hall”, “Boulder”) for 18 other companies including news, entertainment, Universal Studios, theme parks, the Golf Channel, and Telemundo (which are remarkably like schools and departments in their varied autonomy).

Meanwhile NBCUniversal is owned by Comcast—think “System Central Office”. Sure, these are all corporate entities, and they have concrete metrics by which to measure success: revenue, profit, subscribers, viewership, market share. But the relationships among organizations, activities, and outcomes aren’t as coherent and unitary as I’d expected.

Dark or Green?

So, am I on the dark side, or have I left it behind for greener pastures? Curiously, I hear both from my friends and colleagues in higher education: Some of them think my move is interesting and logical, some think it odd and disappointing. Curiouser still, I hear both from my new colleagues in the industry: Some think I was lucky to have worked all those decades in higher education, while others think I’m lucky to have escaped. None of those views seems quite right, and none seems quite wrong.

The point, I suppose, is that simple judgments like “dark” and “greener” underrepresent the complexity of organizational and individual value, effectiveness, and life. Broad-brush characterizations, especially characterizations embodying the ecological fallacy, “…the impulse to apply group or societal level characteristics onto individuals within that group,” do none of us any good.

It’s so easy to fall into the ecological-fallacy trap; so important, if we’re to make collective progress, not to.

Comments or questions? Write me: greg@gjackson.us

(The quote is from Charles Ess & Fay Sudweeks, Culture, technology, communication: towards an intercultural global village, SUNY Press 2001, p 90. Everything in this post, and for that matter all my posts, represents my own views, not those of my current or past employers, or of anyone else.)

3|5|2014 11:44a est

The Rock, and The Hard Place

Looking into the near-term future—say, between now and 2020—we in higher-education IT have to address two big challenges. Neither admits easy progress. But if we don’t address them, we’ll find ourselves caught between a rock and a hard place.

  • The first challenge, the rock, is to deliver high-quality, effective e-learning and curriculum at scale. We know how to do part of that, but key pieces are missing, and it’s not clear how we will find them.
  • The second challenge, the hard place, is to recognize that enterprise cloud services and personal devices will make campus-based IT operations the last rather than the first resort. This means everything about our IT base, from infrastructure through support, will be changing just as we need to rely on it.

“But wait,” I can hear my generation of IT leaders (and maybe the next) say, “aren’t we already meeting those challenges?”

If we compare today’s e-learning and enterprise IT with that of the recent past, those leaders might rightly suggest, immense change is evident:

  • Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors—none were common even in 2000, and they’re commonplace today.
  • Commercial administrative systems, virtualized servers, corporate-style email, web front ends—ditto.

That’s progress and achievement we all recognize, applaud, and celebrate. But that progress and achievement overcame past challenges. We can’t rest on our laurels.

We’re not yet meeting the two broad future challenges, I believe, because in each case fundamental and hard-to-predict change lies ahead. The progress we’ve made so far, however progressive and effective, won’t steer us between the rock of e-learning and the hard place of enterprise IT.

The fundamental change that lies ahead for e-learning
is the transition from campus-based to distance education

Back in the 1990s, Cliff Adelman, then at the US Department of Education, did a pioneering study of student “swirl,” that is, students moving through several institutions, perhaps with work intervals along the way, before earning degrees.

“The proportion of undergraduate students attending more than one institution,” he wrote, “swelled from 40 percent to 54 percent … during the 1970s and 1980s, with even more dramatic increases in the proportion of students attending more than two institutions.” Adelman predicted that “…we will easily surpass a 60 percent multi-institutional attendance rate by the year 2000.”

Moving from campus to campus for classes is one step; taking classes at home is the next. And so distance education, long constrained by the slow pace and awkward pedagogy of correspondence courses, has come into its own. At first it was relegated to “nontraditional” or “experimental” institutions—Empire State College, Western Governors University, UNext/Cardean (a cautionary tale for another day), Kaplan. Then it went mainstream.

At first this didn’t work: fathom.com, for example, a collaboration among several first-tier research universities led by Columbia, found no market for its high-quality online offerings. (Its Executive Director has just written a thoughtful essay on MOOCs, drawing on her fathom.com experience.)

Today, though, a great many traditional colleges and universities successfully bring instruction and degree programs to distant students. Within the recent past these traditional institutions have expanded into non-degree efforts like OpenCourseWare and into broadcast efforts like the MOOC-based Coursera and edX. In 2008, 3.7% of students took all their coursework through distance education, and 20.4% took at least one class that way.

Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors, the innovations that helped us overcome past challenges—little of that progress was designed for swirling students who do not set foot on campus.

We know how to deliver effective instruction to motivated students at a distance. Among policy issues we have yet to resolve, we don’t yet know how to

  • confirm their identity,
  • assess their readiness,
  • guide their progress,
  • measure their achievement,
  • standardize course content,
  • construct and validate curriculum across diverse campuses, or
  • certify degree attainment

in this imminent world. Those aren’t just IT problems, of course. But solving them will almost certainly challenge IT.

The fundamental change that lies ahead for enterprise technologies
is the transition from campus IT to cloud and personal IT

The locus of control over all three principal elements of campus IT—servers and services, networks, and end-user devices and applications—is shifting rapidly from the institution to customers and third parties.

As recently as ten years ago, most campus IT services, everything from administrative systems through messaging and telephone systems to research technologies, were provided by campus entities using campus-based facilities, sometimes centralized and sometimes not. The same was true for the wired and then wireless networks that provided access to services, and for the desktop and laptop computers faculty, students, and staff used.

Today shared services are migrating rapidly to servers and systems that reside physically and organizationally elsewhere—the “cloud”—and the same is happening for dedicated services such as research computing. It’s also happening for networks, as carrier-provided cellular technologies compete with campus-provided wired and WiFi networking, and for end-user devices, as highly mobile personal tablets and phones supplant desktop and laptop computers.

As I wrote in an earlier post about “Enterprise IT,” the scale of enterprise infrastructure and services within IT and the shift in their locus of control have major implications for campus IT and the organizations that have provided it. Campus IT organizations grew up around locally-designed services running on campus-owned equipment managed by internal staff. Organization, staffing, and even funding models followed accordingly. Even in academic computing and user support, “heavy metal” experience was valued highly. The shifting locus of control makes other skills at least as valuable: the ability to negotiate with suppliers, to engage effectively with customers (indeed, to think of them as “customers” rather than “users”), to manage spending and investments under constraint, to explain.

To be sure, IT organizations still require highly skilled technical staff, for example to fine-tune high-performance computing and networking, to ensure that information is kept secure, to integrate systems efficiently, and to identify and authenticate individuals remotely. But these technologies differ greatly from traditional heavy metal, and so must enterprise IT.

The rock, IT, and the hard place

In the long run, it seems to me that the campus IT organization must evolve rapidly to center on seven core activities.

Two of those are substantive:

  • making sure that researchers have the technologies they need, and
  • making sure that teaching and learning benefit from the best thinking about IT applications and effectiveness.

Four others are more general:

  • negotiating and overseeing relationships with outside providers;
  • specifying or doing what is necessary for robust integration among outside and internal services;
  • striking the right personal/institutional balance between security and privacy for networks, systems, and data; and last but not least
  • providing support to customers (both individuals and partner entities).

The seventh core activity, which should diminish over time, is

  • operating and supporting legacy systems.

Creative, energetic, competent staff are sine qua non for achieving that kind of forward-looking organization. It’s very hard to do good IT without good, dedicated people, and those are increasingly difficult to find and keep. Not least, this is because colleges and universities compete poorly with the stock options, pay, glitz, and technology the private sector can offer. Therein lies another challenge: promoting loyalty and high morale among staff who know they could be making more elsewhere.

To the extent the rock of e-learning and the hard place of enterprise IT frame our future, we not only need to rethink our organizations and what they do; we also need to rethink how we prepare, promote, and choose higher-education IT leaders on campus and elsewhere—the topic, fortuitously, of a recent ECAR report, and of widespread rethinking within EDUCAUSE.

We’ve been through this before, and risen to the challenge.

  • Starting around 1980, minicomputers and then personal computers brought IT out of the data center and into every corner of higher education, changing data center, IT organization, and campus in ways we could not even imagine.
  • Then in the 1990s campus, regional, and national networks connected everything, with similarly widespread consequences.

We can rise to the challenges again, too, but only if we understand their timing and the transformative implications.

The Ghost is Ready, but the Meat is Raw

Old joke. Someone writes a computer program (creates an app?) that translates from English into Russian (say) and vice versa. Works fine on simple stuff, so the next test is a bit harder: “the spirit is willing, but the flesh is weak.” The program/app translates the phrase into Russian, then the tester takes the result, feeds it back into the program/app, and translates it back into English. Result: “The ghost is ready, but the meat is raw.”

(The starting phrase is from Matthew 26:41 – the King James version has “indeed” before “willing”, ASV doesn’t, and weirdly enough, if you try this in Google Translate, the joke falls flat, because you get an accurate translation to Russian and back, except for some reason you end up with an extra “indeed” in the final version. It’s almost as though Google Translate has figured out where the quotation came from, and then substituted the King James version for the ASV one, but not quite correctly. Spooky. But I digress.)

Old joke, yes. Tired, even. But, as usual, it’s a metaphor, in this case for a problem that will only become larger as higher education outsources or contracts for ever more of its activity: we think we’re doing the right thing when we contract with outside providers, but the actual effect of the contract, once it takes effect, isn’t quite what we expected. If we’re lucky, we figure this out before we’re irrevocably committed. If we’re unlucky, we box ourselves in.

Two examples.

1. Microsoft Site Licensing

About a decade ago, several of us were at an Internet2 meeting. A senior Microsoft manager spoke about relations with higher education (although looking back, I can’t see why Microsoft would present at I2. Maybe it wasn’t an I2 meeting, but let’s just say it was — never let truth get in the way of a good story). At the time, instead of buying a copy of Office for each computer, as Microsoft licenses required, many students, staff, and faculty simply installed Microsoft Office on multiple machines from one purchased copy — or even copied the installation disks and passed them around. That may save money, but it’s copyright infringement, and illegal.

Microsoft’s response to this problem had been threefold:

  • it began incorporating copy protection and other digital-rights-management (DRM) mechanisms into its installation media so that they couldn’t be copied,
  • it began berating campuses for tolerating the illegal copying (and in some cases attempted to audit compliance with licenses by searching campus computers for illegally obtained software), and
  • it sought to centralize campus procurement of Microsoft software by tailoring and refining its so-called “Select” volume-discount program to encourage campuses to license software campus-wide.

Problem was, the “Select” agreement required campuses to count how many copies of software they licensed, and to maintain records that would enable Microsoft to determine whether each installed copy on campus was properly licensed. This entailed elaborate bookkeeping and tracking mechanisms, exposed campuses to audit risk, and its costs into the future were unpredictable. The volume-discount “Select” program was clearly a step forward, but it fell far short of actually appealing to campuses.

So the several of us in the Internet2 session (or wherever it was) took the Microsoft manager aside afterwards, told him Microsoft needed a more attractive licensing model for campuses, and suggested what that might be.

To our surprise, Microsoft followed up, and the rump-group discussions evolved into the initial version of the Microsoft Campus Agreement. The Campus Agreement (since replaced by Enrollment for Education Solutions, EES) was a true site license: glossing over some complexities and details, its general terms were that campuses would pay Microsoft based on their size and the number of different products they wished to license, and in return would be permitted to use as many copies of those products as they liked.

Most important from the campus perspective, the Campus Agreement included no requirement to track or count individual copies of the licensed products, thereby making all copies legal; in fact, campuses could make their own copies of installation media. Most important from the Microsoft perspective, Campus Agreement pricing was set so that the typical campus would still pay Microsoft about as much as Microsoft had been receiving from that campus’s central or departmental users for Select or individual copies; that is, Microsoft’s revenue from campuses would not decline.
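The economics of that trade can be sketched in a few lines. This is a purely illustrative model of site licensing versus per-copy licensing, not Microsoft’s actual terms; every rate, function name, and number below is invented:

```python
# Illustrative sketch of the two licensing models described above.
# All figures are invented for illustration, not actual Microsoft pricing.

def annual_site_fee(fte, products_licensed, rate_per_fte=50.0):
    """Site-license model: one fee scaled by institution size and product count,
    with no per-copy counting or tracking required."""
    return fte * rate_per_fte * products_licensed

def per_copy_cost(copies, price_per_copy=120.0):
    """Older model: pay for each installed copy, and keep records of them all."""
    return copies * price_per_copy

# A campus with 2,000 FTE licensing two product families (say, an OS and an
# office suite) pays one predictable fee...
site = annual_site_fee(2000, 2)
# ...instead of counting copies; with unlimited installation allowed, the
# effective per-copy price falls as usage grows.
print(site, per_copy_cost(2500))
```

The point of the sketch is the structural difference, not the numbers: under the site model the campus’s cost is fixed and predictable while the marginal copy is free, which is exactly why the exit clause (buying what you had been renting) later proved so awkward.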

The Campus Agreement did entail a fundamental change that was less appealing. In effect, campuses were paying to rent software, with Microsoft agreeing to provide updates at no additional cost, rather than campuses buying copies and then periodically paying to update them. Although it included a few other lines, for the most part the Campus Agreement covered Microsoft’s operating-system and Office products.

Win-win, right? Lots of campuses signed up for the Campus Agreement. It largely eliminated talk about “piracy” of MS-Office products in higher education (enhanced DRM played an important role in this too), and it stabilized costs for most Microsoft client software. It was very popular with students, faculty, and staff, especially since the Campus Agreement allowed institutionally-provided software to be installed on home computers.

But at least one campus, which I’ll call Pi University, balked. The Campus Agreement, PiU’s golf-loving CIO pointed out, had a provision no one had read carefully: if PiU withdrew from the Campus Agreement, he said, it might be required to list and pay for all the software copies that PiU or its students, faculty, and staff had acquired under the Campus Agreement — that is, to buy what it had been renting. The PiU CIO said that he had no way to comply with such a provision, and that therefore PiU could not in good faith sign an agreement that included it.

Some of us thought the PiU CIO’s point was valid but inconsequential. First, some of us didn’t believe that Microsoft would ever enforce the buy-what-you’d-rented clause, so that it presented little actual risk. Second, some of us pointed out that since there was no requirement that campuses document how many copies they distributed, and in general the distribution would be independent of Microsoft, a campus leaving the Campus Agreement could simply cite any arbitrary number of copies as the basis for its exit payment. Therefore, even if Microsoft enforced the clause, estimating the associated payment was entirely under the campus’s control. Those of us who believed these arguments went forward with the Campus Agreement; Pi University didn’t.

So the ghost was ready (higher education had gotten most of what it wanted), but the meat was raw (what we wanted turned out problematic in ways no one had really thought through).

Now let’s turn to a more current case.

2. Outsourcing Campus Bookstores

In February 2012 EDUCAUSE agreed to work with Internet2 on an electronic textbooks pilot. This was to be the third in a series of pilots: Indiana University had undertaken one for the fall of 2011, it and a few other campuses had worked with Internet2 on a second proof-of-concept pilot for the spring of 2012, and the third pilot was to include a broader array of institutions.

Driving these efforts were the observations that textbook prices figured prominently in spiraling out-of-pocket college-attendance costs, that electronic textbooks might help attenuate those prices, and that electronic textbooks also might enable campuses to move from individual student purchases to more efficient site licenses, perhaps bypassing unnecessary intermediaries.

A small team planned the pilot, and began soliciting participation in mid-March. By April 7, the initial deadline, 70 institutions had expressed interest. Over 100 people joined an informational webinar two days later, and it looks as though about 25 institutions will manage to participate and help higher education, publishers, and e-reader providers understand their joint future better.

The ghost/meat example here isn’t the etext pilot itself. Rather, it’s something that caused many interested institutions to withdraw from the pilot: campus bookstore outsourcing.

According to the National Association of College Stores (NACS), there are about 4500 bookstores serving US higher education (probably coincidentally, that’s about the number of degree-granting institutions in the US, of which about two thirds are nonspecialized institutions enrolling more than just a few students). Many stores counted by NACS are simply stores near campuses rather than located on or formally associated with them.

Of the campus-located, campus-associated stores, over 820 are operated under outsourcing contracts by Follett Higher Education Group and about 600 are operated by Barnes & Noble College Booksellers. Another 140 stores are members of the Independent College Bookstore Association (ICBA), and the remainder — I can’t find a good count — are either independent, campus-operated, or operated by some other entity.

The arrangements for outsourced bookstores vary from campus to campus, but they have some features in common. The most prominent of those is the overall deal, which is generally that in return for some degree of exclusivity or special access granted by the campus, the store pays the campus a fee of some kind. The exclusivity or special access may be confined to textbook adoptions, or it may extend to clothing and other items with the campus logo or to computer hardware and software. The payment to the campus may be negotiated explicitly, or it may be a percentage of sales or profit. Some outsourced stores are in campus-owned buildings and pay rent, some own a building part of which is rented to campus offices or activities, and some are freestanding; the associated space payments further complicate the relationship between outsourced stores and campuses but do not change its fundamental dependence on the exchange of exclusivity for fees.

For the most part outsourcing bookstores seems to serve campuses well. Managing orders, inventories, sales, and returns for textbooks and insignia items requires skill and experience with high-volume, low-margin retail, which campus administrators rarely have. Moreover, until recently bookstore operations generally had little impact on campus operations and vice versa.

Because bookstore operations generally stood apart from academic and programmatic activities on campus, negotiating contracts with bookstores generally emphasized “business” issues. Since these for the most part involved money and space, negotiations and contract approvals often remained on the “business” side of campus administration, along with apparently similar issues like dining halls, fleet maintenance, janitorial service, lab supply, and so forth. Again, this served campuses well: the campus administrators most attuned to operations and finance (chief finance officers, chief administrative officers, heads of auxiliary services) were the right ones to address bookstore issues.

Over the past few years this changed, first gradually and then more abruptly.

  • First, having bookstores handle hardware and software sales to students (and in some cases departments) came into conflict with campus desires to guide individual choices and maximize support efficiency through standardization and incentives, none of which aligned well with bookstores’ need to maximize profit from IT sales — an important goal, with campus bookstore sales essentially flat since 2005-2006 despite 10%+ enrollment growth.
  • Second, the high price of textbooks drew attention as a major component of growing college costs, and campuses sought to regain some control over it – NACS reports that the average student spends $483 on texts and related materials, that the average textbook price rose from $56 in 2006-2007 to $62 in 2009-2010, and that the typical margin on textbooks is about 22% for new texts and 35% for used ones.
  • Third, as textbooks have begun to migrate from static paper volumes to interactive electronic form, they have come to resemble software more than sweatshirts in that individual student purchases through bookstores may not be the optimal way to distribute or procure them.

That last point — that bookstores may not be the right medium for selling and buying textbooks — potentially threatens the traditional bookstore model, and therefore the outsourcing industry based on it. Not surprisingly, bookstores have responded aggressively to this threat, both offensively and defensively. On the offensive front (I mean this in the sense “trying to advance”, rather than “trying to offend”), the major bookstore chains have invested in e-reader technology, and have begun experimenting extensively with alternative pricing and delivery models. On the defensive front, they have tried to extend past exclusivity clauses to include electronic texts and other new materials.

Many campuses expressed interest in the EDUCAUSE/Internet2 EText Pilot, going so far as to add themselves to a list, make preliminary commitments, and attend the webinar. Filled with enthusiasm, many webinar attendees began talking up the pilot on their campuses, and many of them then ran into a wall: they learned, often only when they double-checked with their counsel in the final stages of applying, that their bookstore contracts — Barnes & Noble and Follett both — precluded their participation in even a pilot exploration of alternative etext approaches, since the right to distribute electronic textbooks was reserved exclusively for the outsourced bookstore.

The CIO from one campus — I’ll call it Omega University — discovered that a recent renewal of the bookstore contract provided that during the 15-year term of the contract, “the Bookstore shall be the University’s …exclusive seller of all required, recommended or suggested course materials, course packs and tools, as well as materials published or distributed electronically, or sold over the Internet.” The OmegaU CIO was outraged: “In my mind,” he wrote, “the terms exclusive and over the Internet can’t even be in the same sentence! And to restrict faculty use of technology for the next 15 years is just insane.”

If the last decade has taught us anything, it is that the evolutionary cycle for electronic products is very short, requiring near-constant reappraisal of business models, pricing, and partnerships. That someone on campus signed a contract fixing electronic distribution mechanisms for 15 years may be an extreme case, but we’ve learned even from less pernicious cases that exclusivity arrangements bound to old business models will drastically constrain progress.

And so the ghost’s readiness again yielded raw meat: technological progress translated well-intentioned, longstanding bookstore contracts that had served campuses well into obstacles impeding even the consideration of important changes.

3. So What Do We Do?

It’s important to draw the right inference from all this.

The problem isn’t simply Microsoft trying to lock customers into the Campus Agreement or bookstore operators being avaricious; rather, each is acting in its own self-interest, albeit self-interest that is a bit short-sighted.

The compounding problem is that we in higher education often make decisions too narrowly. In the case of the Campus Agreement, we were so focused on the important move from per-copy to site licensing, a major win, that we didn’t devote sufficient negotiating time or effort to the so-called exit clauses — which, in retrospect, could certainly have been written in much less problematic ways still acceptable to Microsoft. In the case of bookstore contracts, we failed to recognize that what had been a distinct, narrow set of activities readily handled within business and finance was being driven by technology into new domains requiring foresight and expertise generally found elsewhere on campus.

Sadly, there’s no simple solution to this problem. It’s hard to take everything into account or involve every possible constituency in a decision and still get it done, and decisions must get done. Perhaps the best solution we can hope for is better, more transparent discussion of both past decisions and future opportunities, so that we learn collectively and openly from our mistakes, take joint responsibility for our shared technological future, and translate accurately back and forth between what we want and what we get.

IT Demography in Higher Education: Some Reminiscence & Speculation

In oversimplified caricature, many colleges and universities have traditionally staffed the line, management, and leadership layers of their IT enterprise thus:

Students with some affinity for technology (perhaps their major, perhaps work-study, perhaps just a side interest) have approached graduation not quite sure what they should do next. They’ve had some contact with the institution’s IT organizations, perhaps having worked for some part of them or perhaps having criticized their services. Whatever the reason, working for an institutional IT organization has seemed a useful way to pay the rent while figuring out what to do next, and it’s been a good deal for the IT organizations because recent graduates are usually pretty clever, know the institution well, learn fast, and are willing to work hard for relatively meager pay.

Moreover, and partly compensating for low pay, the technologies being used and considered in higher education often have been more advanced than those out in business, so sticking around has been a good way to stay at the cutting edge technologically, and colleges and universities have tended to value and reward autonomy, curiosity, and creativity.

Within four or five years of graduation, most staff who come straight into the IT organization have figured out that it’s time to move on. Sometimes a romantic relationship has turned their attention to life plans and long-term earnings, sometimes ambition has taken more focused shape and so they seek a steeper career path, sometimes their interests have sharpened and readied them for graduate school — but in any case, they have left the campus IT organization for other pastures after a few good, productive years, and have been replaced by a new crop of recent graduates.

But a few individuals have found that working in higher education suits their particular hierarchy of needs (to adapt and somewhat distort Maslow). For them, IT work in higher education has yielded several desiderata (remember I’m still caricaturing here): there’s been job security, a stimulating academic environment, a relatively flat organization that offers considerable responsibility and flexibility, and an opportunity to work with and across state-of-the-art (and sometimes even more advanced) technologies. Benefits have been pretty good, even though pay hasn’t and there have been no stock options. Individuals to whom this mix appeals have stayed in campus IT, rising to middle-management levels, sometimes getting degrees in the process, and sometimes, as they have moved into #3 or #2 positions, even moving to other campuses as opportunities presented themselves.

Higher-education IT leaders — that is, CIOs, the heads of major decentralized IT organizations, and in some cases the #2s within large central organizations — typically have come from one of two sources. Some have come from within higher-education IT organizations, sometimes the institution’s own but more typically, since a given institution usually has more leadership-ready middle managers than it has available leadership positions, another institution’s. (Whereas insiders once tended to be heavy-metal computer-center directors, more recently they have come from academic technologies or networking.) Other leaders have come from faculty ranks, often (but not exclusively) in computer science or other technically oriented disciplines. Occasionally leaders have come from other sources, such as consulting firms or technology vendors, or even from administration elsewhere in higher education.

The traditional approach staffs IT organizations with well educated, generally clever individuals highly attuned to the institution’s culture and needs. They are willing and able to tackle complex IT projects involving messy integration among different technologies. Those individuals also cost less than comparable ones would if hired from outside. Expected turnover among line staff notwithstanding, they are loyal to the institution even in the face of financial and management challenges.

But the traditional model also tilts IT organizations toward idiosyncrasy and patchwork rather than coherent architecture and efficiency-driven implementation. It often works against the adoption of effective management techniques, and it can promote hostility toward businesslike approaches to procurement and integration and indeed the entire commercial IT marketplace. All of this has been known, but in general institutions have continued to believe that the advantages of the traditional model outweigh its shortcomings.

I saw Moneyball in early October. I liked it mostly because it’s highly entertaining, it’s a good story, it’s well written, acted, directed, and produced, and it involves both applied statistical analysis (which is my training) and baseball (my son’s passion, and mine when the Red Sox are in the playoffs). I also liked it because its focus — dramatic change in how one staffs baseball teams — led me to think about college and university IT staffing. (And yes, I know my principles list says that “all sports analogies mislead”, but never mind.)

In one early scene, the Oakland A’s scouting staff explains to Brad Pitt’s character, Billy Beane, that choosing players depends on intuition honed by decades of experience with how the game is played, and that the approach Beane is proposing — choosing them based on how games are won rather than on intuition — is dangerous and foolhardy. Later, Arliss Howard’s character, the Red Sox owner John Henry, explains that whenever one goes against long tradition all hell breaks loose, and whoever pioneers or even advocates that change is likely to get bloodied.

So now I’ll move from oversimplification and caricature to speculation. To believe in the continued validity of the traditional staffing model may be to emulate the scouts in Moneyball. But to abandon the model is risky, since it’s not clear how higher-education IT can maintain its viability in a more “businesslike” model based on externally defined architectures, service models, and metrics. After all, Billy Beane’s Oakland A’s still haven’t won the World Series.

The Beane-like critique of the traditional model isn’t that the advantage/shortcoming balance has shifted, but rather that it depends on several key assumptions whose future validity is questionable. To cite four interrelated ones:

  • With the increasing sophistication of mobile devices and cloud-based services, the locus of technological innovation has shifted away from colleges and universities. Recent graduates who want to be in the thick of things while figuring out their life plans have much better options than staying on campus — they can intern at big technology firms, or join startups, or even start their own small businesses. In short, there is now competition for young graduates interested in IT but unsure of their long-term plans.
  • As campuses have outsourced or standardized much of their IT, jobs that once included development and integration responsibility have evolved into operations, support, and maintenance — which are important, but not very interesting intellectually, and which provide little career development. Increased outsourcing has exacerbated this, and so has increased reliance on business-based metrics for things like user support and business-based architectures for things like authentication and systems integration.
  • College and university IT departments could once offset this intellectual narrowing because technology prices were dropping faster than available funds, and the resulting financial cushion could be dedicated to providing staff with resources and flexibility to go beyond their specific jobs (okay, maybe what I mean is letting staff buy gadgets and play with them). But tightened attention to productivity and resource constraints has largely eliminated the offsetting toys and flexibility. So IT jobs in colleges and universities have lost much of their nonpecuniary attractiveness, without any commensurate increase in compensation. Because of this, line staff are less likely to choose careers in college or university IT, and without this source of replenishment the higher-education IT management layer is aging.
  • As IT has become pervasively important to higher education, so responsibility for its strategic direction has broadened. As strategic direction has broadened, so senior leadership jobs, including the CIO’s, have evolved away from hierarchical control and toward collaboration and influence. (I’ve written about this elsewhere.) At the same time, increasing attention to business-like norms and metrics has required that IT leaders possess a somewhat different skillset than usually emerges from gradual promotion within college and university IT organizations or faculty experience. This has disrupted the supply chain for college and university IT leadership, as a highly fragmented group of headhunter firms competes to identify and recruit nontraditional candidates.

I think we’re already seeing dramatic change resulting from all this. The most obvious change is rapid standardization around commercial standards to enable outsourcing — which is appealing not only intrinsically, but because it reduces dependence on an institution’s own staff. (On the minus side, it also tends to emphasize proprietary commercial rather than open-source or open-standards approaches.) I also sense much greater interest in hiring from outside higher education, both at the line and management levels, and a concomitant reappraisal of compensation levels. That, combined with flat or shrinking resources, is eliminating positions, and the elimination of positions is promoting even more rapid standardization and outsourcing.

On the plus side, this is making college and university IT departments much more efficient and businesslike. On the minus side, higher education IT organizations may be losing their ability to innovate. This is yet another instance of the difficult choice facing us in higher-education IT: Is IT simply an important, central element of educational, research, and administrative infrastructure, or is IT also the vehicle for fundamental change in how higher education works? (In Moneyball, the choice is between player recruitment as a mechanism for generating runs, and as a mechanism for exciting fans. Sure, Red Sox fans want to win. But were they more avid before or after the Curse ended with Bill James’s help?)

If it’s the latter, we need to make sure we’re equipped to enable that — something that neither the traditional model nor the evolving “businesslike” model really does.


Institutional Demography in Higher Education: A Reminder

To understand why policy debates sometimes seem to make no sense, to circle endlessly, or to become bafflingly confused, it’s important to remember that the demography of higher education isn’t politically straightforward. By “demography” I don’t mean Gen X, Gen Y, and echo booms, but rather straightforward counts of degree-granting institutions and students. And by “politically” I don’t mean Republicans and Democrats, but rather the relative importance of different constituencies with different resources and goals.

Here’s a graph (I’ll append a more detailed table at the end). The data come from the National Center for Education Statistics 2008 IPEDS surveys. They describe the 4,474 public and private degree-granting institutions in the United States, classified into the usual Carnegie categories. I’ve collapsed Carnegie and size categories: “Small” means enrollment under 2,500, and “Large” means enrollment of 20,000 or more. The categories whose labels I’ve italicized are mostly private, those I’ve underscored are mostly public, and those I’ve both italicized and underscored are split between public and private institutions.

Most of us know some key demographic facts about higher education — for example, that the largest group of students is in 2-year colleges, followed closely by research universities. We also know that an awful lot of commentary and influence in higher education comes from people in or connected with research universities, and therefore many of us have trouble thinking about other kinds of institutions, let alone new kinds.

Here are some things we tend to forget:

  • There really aren’t very many research and doctoral universities — they account for fewer than 10% of institutions even though they enroll over 25% of all students.
  • Although there are lots of big community colleges and they enroll lots of students, not all 2-year colleges are big community colleges; rather, more than half of them are small, and most of those are private.
  • Most small 4-year and master’s institutions are also private, and although they comprise almost 20% of all institutions, they enroll only 5% of all students.
  • There are a lot of specialized institutions — that is, freestanding business, health, medical, engineering, technical, design, theological, and other similar institutions — but they don’t enroll very many students.
  • Enrollment isn’t quite a Pareto distribution (that’s the classic 20-80 rule), but it’s pretty close: 33% of institutions enroll 80% of the students.
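
The concentration claim in the last bullet is easy to check against any enrollment list. Here’s a minimal sketch with toy numbers of my own invention (not the IPEDS data): it finds the smallest share of institutions, largest first, that accounts for 80% of total enrollment.

```python
def concentration_share(enrollments, target=0.80):
    """Fraction of institutions (counting from the largest) needed to
    reach `target` share of total enrollment."""
    sizes = sorted(enrollments, reverse=True)
    total = sum(sizes)
    running = 0
    for i, size in enumerate(sizes, start=1):
        running += size
        if running >= target * total:
            return i / len(sizes)
    return 1.0

# Toy data: a few big institutions and many small ones.
toy = [30000] * 5 + [8000] * 15 + [1000] * 80
print(concentration_share(toy))  # 0.3 — 30% of institutions hold 80% of students
```

With a distribution shaped even roughly like this toy one, about a third of institutions account for 80% of enrollment — the near-Pareto pattern described above.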

What this tells us is that the politics of higher education — and, indeed, the politics of organizations, like my own EDUCAUSE, that try to represent all of higher education — are very different depending on whether we focus on students or on institutions.

If we focus on students, the politics are pretty straightforward. Big community colleges, big master’s institutions, and doctoral and research universities count, and all other institutions don’t. The group that counts is mostly public institutions, so state governments and state system offices also count. Research and doctoral universities employ lots of faculty to whom research is as important as teaching, and who are vocal about its importance, so disciplinary groups and research funders are also relevant.

From the enrollment perspective, how higher education evolves depends critically on what happens in big community colleges and in research and doctoral universities, since that’s where students are. Conversely, unless those institutions adapt to what students expect, we can expect cataclysmic change in higher education.

But community colleges and universities are at opposite ends of the cultural spectrum. To give just two examples, the former rely heavily on faculty hired ad hoc to teach specific courses, the latter rely on tenured or tenure-track faculty, and the former have no interest in research productivity or eminence whereas the latter stake their reputations on it. So the two most important sectors (in the enrollment sense) often are misaligned if not at odds about policy choices, and the changes they contemplate and implement are likely to be divergent rather than synergistic.

The other 3,000 institutions are different. Of the 4,474 institutions, almost 2/3 are private, and almost 2/3 are small, with lots of overlap: just over half of all institutions are small and private. If we focus on institutions rather than enrollment, we attend to a very large number of small, private institutions that often have different missions and challenges, typically do not work together, and, with a few exceptions, are not organized or collectively vocal.

For these institutions, the difference between survival and demise can depend on tiny changes in enrollment or financial aid or even audit policies, since their small size denies them the operating cushions and economies of scale available to their larger and public counterparts. This also means that these institutions have no excess resources to invest in innovation, so they are unlikely to adapt to changing student needs.

In a sense, focusing on enrollment tends to yield interesting and strategic (if conflicting) attention to the future, whereas focusing on institutions tends to balance (if not replace) this with a focus on tactical survival and the intricacies of current policy.

But my point isn’t about specific policy options or imperatives. Rather, it’s this: what’s important varies dramatically depending whether we focus on institutions or students. And that, I think, not only contributes to the complexity of our conversations within the status quo of higher education, but also complicates thinking about its future.

Detailed Table

Degree-Granting Institutions by Type, Control, and Size, IPEDS 2008 Data


IT and Post-Institutional Higher Education: Will We Still Need Brad When He’s 54?

“There are two possible solutions,” Hercule Poirot says to the assembled suspects in Murder on the Orient Express (that’s p. 304 in the Kindle edition, but the 1974 movie starring Albert Finney is way better than the book, and it and the book are both much better than the abominable 2011 PBS version with David Suchet). “I shall put them both before you,” Poirot continues, “…to judge which solution is the right one.”

So it is for the future role, organization, and leadership of higher-education IT. There are two possible solutions. There’s a reasonably straightforward projection of how the role of IT in higher education will evolve into the mid-range future, but there’s also a more complicated one. The first assumes institutional continuity and evolutionary change. The second doesn’t.

IT Domains

How does IT serve higher education? Let me count the ways:

  1. Infrastructure for the transfer and storage of pedagogical, bibliographic, research, operational, and administrative information, in close synergy with other physical infrastructure such as plumbing, wiring, buildings, sensors, controls, roads, and vehicles. This includes not only hardware such as processors, storage, networking, and end-user devices, but also basic functionality such as database management and hosting (or virtualizing) servers.
  2. Administrative systems that manage, analyze, and display the information students, faculty, and staff need to manage their own work and that of their departments. This includes identity management, authentication, and other so-called “middleware” through which institutions define their communities.
  3. Pedagogical applications students and faculty need to enable teaching and learning, including tools for data analysis, bibliography, simulation, writing, multimedia, presentations, discussion, and guidance.
  4. Research tools faculty and students need to advance knowledge, including some tools that also serve pedagogy plus a broad array of devices and systems to measure, gather, simulate, manage, share, distill, analyze, display, and otherwise bring data to bear on scholarly questions.
  5. Community services to support interaction and collaboration, including systems for messaging, collaboration, broadcasting, and socialization both within campuses and across their boundaries.

“…A Suit of Wagon Lit Uniform…and a Pass Key…”

The straightforward projection, analogous to Poirot’s simpler solution (an unknown stranger committed the crime, and escaped undetected), stems from projections of how institutions themselves might address each of the IT domains as new services and devices become available, especially cloud-based services and consumer-based end-user devices. The core assumptions are that the important loci of decisions are intra-institutional, and that institutions make their own choices to maximize local benefit (or, in the economic terms I mentioned in an earlier post, to maximize their individual utility).

Most current thinking in this vein goes something like this:

  • We will outsource generic services, platforms, and storage, and perhaps
  • consolidate and standardize support for core applications and
  • leave users on their own insofar as commercial devices such as phones and tablets are concerned, but
  • we must for the foreseeable future continue to have administrative systems securely dedicated and configured for our unique institutional needs, and similarly
  • we must maintain control over our pedagogical applications and research tools since they help distinguish us from the competition.

Evolution based on this thinking entails dramatic shrinkage in data-center facilities, as virtualized servers housed in or provided by commercial or collective entities replace campus-based hosting of major systems. It entails several key administrative and community-service systems being replaced by standard commercial offerings — for example, the replacement of expense-reimbursement systems by commercial products such as Concur, of dedicated payroll systems by commercial services such as ADP, and of campus messaging, calendaring, and even document-management systems by more general services such as Google’s or Microsoft’s. Finally, thinking like this typically drives consolidation and standardization of user support, bringing departmental support entities into alignment if not under the authority of central IT, and standardizing requirements and services to reduce response times and staff costs.

How might higher-education IT evolve if this is how things go? In particular, what effects would it have on IT organization and leadership?

One clear consequence of such straightforward evolution is a continuing need for central guidance and management across essentially the current array of IT domains. As I tried to suggest in a recent article, the nature of that guidance and management would change, in that control would give way to collaboration and influence. But institutions would retain responsibility for IT functions, and it would remain important for major systems to be managed or procured centrally for the general good. Although the skills required of the “chief information officer” would be different, CIOs would still be necessary, and most cross-institutional efforts would be mediated through them. Many of those cross-institutional efforts would involve coordinated action of various kinds, ranging from similar approaches to vendors through collective procurement to joint development.

We’d still need Brads.

“Say What You Like, Trial by Jury is a Sound System…”

If we think about the future unconventionally (as Poirot does in his second solution — spoiler in the last section below!), a somewhat more radical, extra-institutional projection emerges. What if Accenture, McKinsey, and Bain are right, and IT contributes very little to the distinctiveness of institutions — in which case colleges and universities have no business doing IT idiosyncratically or even individually?

In that case,

  • we will outsource almost all IT infrastructure, applications, services, and support, either to collective enterprises or to commercial providers, and therefore
  • we will not need data centers or staff, including server administrators, programmers, and administrative-systems technical staff, so that
  • the role of institutional IT will be largely to provide highly tailored support for research and instruction, which means that
  • in most cases there will be little to be gained from centralizing IT,
  • it will make sense for academic departments to do their own IT, and
  • we can rely on individual business units to negotiate appropriate administrative systems and services, and so
  • the balance will shift from centralized to decentralized IT organization and staffing.

What if we’re right that mobility, broadband, cloud services, and distance learning are maturing to the point where they can transform education, so that we have simultaneous and similarly radical change on the academic front?

Despite changes in technology and economics, and some organizational evolution, higher education remains largely hierarchical. Vertically-organized colleges and universities grant degrees based on curricula largely determined internally, curricula largely comprise courses offered by the institution, institutions hire their own faculty to teach their own courses, and students enroll as degree candidates in a particular institution to take the courses that institution offers and thereby earn degrees. As Jim March used to point out, higher education today (well, okay, twenty years ago, when I worked with him at Stanford) is pretty similar to its origins: groups sitting around on rocks talking about books they’ve read.

It’s never been that simple, of course. Most students take some of their coursework from other institutions, some transfer from one to another, and since the 1960s there have been examples of network-based teaching. But the model has been remarkably robust across time and borders. It depends critically on the metaphor of the “campus”, the idea that students will be in one place for their studies.

Mobility, broadband, and the cloud redefine “campus” in ways that call the entire model into question, and thereby may transform higher education. A series of challenges lies ahead on this path. If we tackle and overcome these challenges, higher education, perhaps even including its role in research, could change in very fundamental ways.

The first challenge, which is already being widely addressed in colleges, universities, and other entities, is distance education: how to deliver instruction and promote learning effectively at a distance. Some efforts to address this challenge involve extrapolating from current models (many community colleges, “laptop colleges”, and for-profit institutions are examples of this), some involve recycling existing materials (Open CourseWare, and to a large extent the Khan Academy), and some involve experimenting with radically different approaches such as game-based simulation. There has already been considerable success with effective distance education, and more seems likely in the near future.

As it becomes feasible to teach and learn at a distance, so that students can be “located” on several “campuses” at once, students will have no reason to take all their coursework from a single institution. A question arises: If coursework comes from different “campuses”, who defines curriculum? Standardizing curriculum, as is already done in some professional graduate programs, is one way to address this problem — that is, we may define curriculum extra-institutionally, “above the campus”. Such standardization requires cross-institutional collaboration, oversight from professional associations or guilds, and/or government regulation. None of this works very well today, in part because such standardization threatens institutional autonomy and distinctiveness. But effective distance teaching and learning may impel change.

As courses relate to curricula without depending on a particular institution, it becomes possible to imagine divorcing the offering of courses from the awarding of degrees. In this radical, no-longer-vertical future, some institutions might simply sell instruction and other learning resources, while others might concentrate on admitting students to candidacy, vetting their choices of and progress through coursework offered by other institutions, and awarding degrees. (Of course, some might try to continue both instructing and certifying.) To manage all this, it will clearly be necessary to gather, hold, and appraise student records in some shared or central fashion.

To the extent this projection is valid, not only does the role of IT within institutions change, but the very role of institutions in higher education changes. Local support remains important for the IT components of distinctive coursework, and of course for research, but almost everything else — administrative and community services, infrastructure, general support — becomes either so standardized and/or outsourced as to require no institutional support, or becomes an activity for higher education generally rather than for colleges or universities individually. In the extreme case, the typical institution really doesn’t need a central IT organization.

In this scenario, individual colleges and universities don’t need Brads.

“…What Should We Tell the Yugo-Slavian Police?”

Poirot’s second solution to the Ratchett murder (everyone including the butler did it) requires astonishing and improbable synchronicity among a large number of widely dispersed individuals. That’s fine for a mystery novel, but rarely works out in real life.

I therefore don’t suggest that the radical scenario I sketched above will come to pass. As many scholars of higher education have pointed out, colleges and universities are organized and designed to resist change. So long as society entrusts higher education to colleges and universities and other entities like them, we are likely to see evolutionary rather than radical change. So my extreme scenario, perhaps absurd on its face, seeks only to suggest that we would do well to think well beyond institutional boundaries as we promote IT in higher education and consider its transformative potential.

And more: if we’re serious about the potentially transformative role of mobility, broadband, and the cloud in higher education, we need to consider not only what IT might change but also what effects that change will have on IT itself — and especially on its role within colleges and universities and across higher education.

Individual Utility, Joint Action, and The Prisoner’s Dilemma

Back in 1977, Ken Arrow, having won the Nobel Prize five years earlier, wondered about the internal functioning of firms. “To what extent is it necessary for the efficiency of a corporation,” he wrote, “that its decisions be made at a high level where a wide degree of information is, or can be made, available? How much, on the other hand, is gained by leaving a great deal of latitude to individual departments which are closer to the situations with which they deal, even though there may be some loss due to imperfect coordination?” The answer depends somewhat on whether the firm has one goal or several, on the correlation among multiple goals, and on the degree to which different departments contribute to different goals.

In general, though, the answer is sobering for advocates of decentralization. The severally optimal choices of departments rarely combine to yield the jointly optimal choice for the overall enterprise. That’s not to say that centralization is wrong, of course. It merely means that one must balance the healthy and interesting diversity that results from decentralization against the overall inefficiency it can cause.

If we shift focus from the firm to enterprises within an economic sector, the same observations hold. To the extent enterprises pursue diverse goals primarily for their own benefit rather than for the efficiency of the entire sector, that sector will be both diverse and inefficient — perhaps to the extremes of idiosyncrasy and counterproductivity. Put differently, if the actors within a sector value individuality, they will sacrifice sector-wide efficiency; if they value sector-wide efficiency, they must sacrifice individuality.

Higher education traditionally has placed a high value on institutional individuality. Some years back a Harvard faculty colleague of mine, Harold “Doc” Howe II (who had been US Commissioner of Education under Lyndon Johnson), observed how peculiar it was that mergers and acquisitions were so rarely contemplated, let alone achieved, in higher education, even though by any rational analysis there were myriad opportunities for interesting, effective mergers. (Does the United States really need almost 4,000 nonprofit, degree-granting postsecondary institutions, not to mention 14,000 public school districts?) Among research universities, for example, Case Western Reserve University and Carnegie-Mellon University were two of the few successful mergers; there were some instances of acquisitions and subordinations (I’m not counting Brown/Pembroke, Columbia/Barnard, Tufts/Jackson, or their kin); and there were several prominent failures — for example, the attempts to merge the Cambridge anchors Harvard and MIT. (Wikipedia’s page on college mergers lists fewer than 100 mergers of any kind.)

If higher education isn’t going to gain efficiency through institutional aggregation, then its only option is to do so through institutional collaboration. There are lots of good examples where this has happened: I’d include athletic leagues, part of whose purpose is to negotiate effectively with networks; library collaborations, such as OCLC, that seek to reduce redundant effort; research collaborations, such as Fermilab, through which institutions share expensive facilities; and IT collaborations, such as Internet2.

That last is a bit different from the others, in that it involves a group of institutions joining forces to buy services together. Why is joint procurement like that so rare in US higher education? I think there are two tightly connected reasons:

  • US higher education has valued institutional individuality far more highly than collective efficiency — that is, it assigns less importance to collective utility than to individual utility (“utility” being the microeconomics term for the value an actor expects).
  • At the same time, it has failed to make the critical distinction between what Ryan Oakes, of Accenture‘s higher-education practice, recently called “differentiating” activities (those on which institutions reasonably compete) and generic “non-differentiating” activities (those where differences among peers are irrelevant to success). As a result, institutions have behaved competitively in all but a few contexts, even in those non-differentiating areas where collaboration is the right answer.

Although it’s a bit of a caricature, the situation somewhat resembles the scenario of the Rand Corporation‘s 1950s-era game-theory problem, The Prisoner’s Dilemma. Here’s a version from Wikipedia:

Two suspects are arrested by the police. The police have insufficient evidence for a conviction, and, having separated the prisoners, visit each of them to offer the same deal. If one testifies for the prosecution against the other (defects) and the other remains silent (cooperates), the defector goes free and the silent accomplice receives the full one-year sentence. If both remain silent, both prisoners are sentenced to only one month in jail for a minor charge. If each betrays the other, each receives a three-month sentence. Each prisoner must choose to betray the other or to remain silent. Each one is assured that the other would not know about the betrayal before the end of the investigation. How should the prisoners act?

The dilemma is this:

  • The optimal individual choice for each prisoner is to rat out the other — that is, to “defect” — since this guarantees him or her a sentence of no more than three months, with a shot at freedom if the other prisoner remains silent. Individuals seeking to maximize their own success (to make a “utility-maximizing rational choice”, in microeconomic terms) thus choose to defect. In decision-analytic terms, since prisoner A has no idea what prisoner B will do, A assigns a probability of .5 to each possible choice B might make. A multiplies those probabilities by the consequences to obtain the expected values of his or her two options: (3)(.5)+(0)(.5) = 1.5 months for defecting, and (12)(.5)+(1)(.5) = 6.5 months for cooperating. A chooses to defect. B does the same calculation, and also chooses to defect. Since both choose to defect, each gets a three-month sentence, and they serve a total of six months in jail.
  • The optimal choice for the two prisoners together, as measured by the total of their two sentences, is for both to remain silent, that is, to cooperate. This yields a sentence of one month for each prisoner, or two months in total. In contrast, defect/cooperate and cooperate/defect each yield twelve months (one year for one prisoner, freedom for the other), and defect/defect yields six months (three months for each). So the best joint choice is for A and B both to remain silent.

So each prisoner acting in his or her own self-interest yields more individual and total prison time than each acting for their joint good — each would serve three months rather than one. But since A cannot know that B will cooperate and vice versa, each of them chooses self-interest, and both end up worse off.
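For the arithmetically inclined, the payoffs above can be tabulated and checked in a few lines of code (Python here, purely as an illustration of the numbers in the text):

```python
# Sentences in months, keyed by (my choice, other prisoner's choice).
SENTENCE = {
    ("defect", "cooperate"): 0,     # I testify, the other stays silent: I go free
    ("defect", "defect"): 3,        # both testify: three months each
    ("cooperate", "cooperate"): 1,  # both stay silent: one month each
    ("cooperate", "defect"): 12,    # I stay silent, the other testifies: a full year
}

def expected_sentence(my_choice, p_other_defects=0.5):
    """Expected months in jail, given a 50/50 guess about the other prisoner."""
    return (SENTENCE[(my_choice, "defect")] * p_other_defects
            + SENTENCE[(my_choice, "cooperate")] * (1 - p_other_defects))

ev_defect = expected_sentence("defect")        # (3)(.5) + (0)(.5)  = 1.5 months
ev_cooperate = expected_sentence("cooperate")  # (12)(.5) + (1)(.5) = 6.5 months

joint_defect = SENTENCE[("defect", "defect")] * 2          # 6 months served in total
joint_cooperate = SENTENCE[("cooperate", "cooperate")] * 2  # 2 months served in total
```

Running the calculation confirms the dilemma: defecting minimizes each prisoner’s expected sentence (1.5 versus 6.5 months), yet mutual cooperation minimizes the total time served (two months versus six).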

The situation isn’t quite the same for several colleges that might negotiate together for a good deal from a vendor, mostly because no one will get anything for free. But a problem like the prisoner’s dilemma arises when one or more members of the group conclude that they can get a better deal from the vendor by themselves than what they think the group would obtain. If those members try to cut side deals, the incentive for the vendor to deal with the other members shrinks, especially if the defecting members’ deals consume a substantial fraction of the vendor’s price flexibility. The vendor prefers doing a couple of side deals to the overall deal so long as the side deals require less total discount than the group deal would. Members have every incentive to cut side deals, vendors prefer a small number of side deals to a blanket deal, and so unless all the colleges behave altruistically a joint deal is unlikely.
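To see the vendor’s incentive concretely, here is the same logic with entirely hypothetical numbers (the prices, discounts, and group size are invented for illustration):

```python
# Hypothetical figures: a vendor weighs one blanket discount for a
# ten-college consortium against deeper discounts for two defectors.
list_price = 100_000      # per-college price (invented)
group_size = 10           # colleges in the consortium (invented)
group_discount = 0.20     # blanket discount the group might negotiate
side_discount = 0.30      # deeper discount offered to a lone defector
defectors = 2

# Total discount the vendor gives up under each arrangement.
group_deal_cost = group_size * list_price * group_discount  # 200,000
side_deal_cost = defectors * list_price * side_discount     #  60,000

# The vendor prefers side deals whenever they cost less total discount.
vendor_prefers_side_deals = side_deal_cost < group_deal_cost  # True here
```

Even though each defector gets a deeper discount, the vendor gives up far less in total, which is exactly why a few side deals can unravel the group’s leverage.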

And so the $64 question: What would break this cycle? The answer is simple: sharing information, and committing to joint action. If the prisoners could communicate before deciding whether to defect or cooperate, their rational choice would be to cooperate. If colleges shared information about their plans and their deals, the likelihood of effective joint action would increase sharply. That would be good for the colleges and not so good for the vendor. From this perspective, it’s clear why non-disclosure clauses are so common in vendor contracts.

In the end, the only path to effective joint action is a priori collaboration — that is, agreeing to pool resources, including clout and information, and work together for the common good. So long as colleges and universities hold back from collaboration (for example, saying, as about 15% of respondents did in a recent EDUCAUSE survey, that their institutions would wait to see what others achieved before committing to collaboration), successful joint action will remain difficult.

GoTo, Gas Pedals, & Google: What Students Should Know, and Why That’s Not What We Teach Them

In the 1980s I began teaching a course in BASIC programming in the Harvard University Extension, part of an evening Certificate of Advanced Study program for working students trying to get ahead. Much to my surprise, students immediately filled the small assigned lecture hall to overflowing, and nearly overwhelmed my lone teaching assistant.

Within two years, the course had grown to 250+ students. They spread throughout the second-largest room in the Harvard Science Center (Lecture Hall C – the one with the orange seats, for those of you who have been there). I now had a dozen TAs, so I was in effect not only teaching the BASIC course, but also leading a seminar on the pedagogical challenge of teaching non-technical students how to write structured programs in a language that heretically allowed “GoTo” statements.

Computer Literacy?

There’s nothing very interesting or exciting about learning to program in BASIC. Although I flatter myself a good teacher, even my best efforts to render the material engaging – for example, assignments that variously involved having students act out various roles in Stuart Madnick’s deceptively simple Little Man Computer system, automating Shirley Ellis‘s song The Name Game, and modeling a defined-benefit pension system – in no way explained the course’s popularity.

So what was going on? I asked students why they were taking my course. Most often, they said something about “computer literacy”. That’s a useful (if linguistically confused) term, but in this case a misleading one.

If the computer becomes important, the analogy seems to run, then the ability to use a computer becomes important, much as the spread of printed material made reading and writing important. So far so good. For the typical 1980s employee, however, using computers in business and education centered on applications like word processors, spreadsheets, charts, and maybe statistical packages. Except for those within the computer industry, it rarely involved writing code in high-level languages.

BASIC programming thus had little direct relevance to the “computer literacy” students actually needed. The print era made reading and writing important for the average worker and citizen. But only printers needed adeptness with the technologies of paper, ink, composition (in the Linotype sense), and presses. That’s why the analogy fails: programming, by the 1980s, was about making computer applications, not using them. That’s the opposite of what students actually needed.

Yet clearly students viewed the ability to program in BASIC – even “Shirley Shirley bo-birley…” – as somehow relevant to the evolving challenges of their jobs. If BASIC programming wasn’t directly relevant to actual computer literacy, why did they believe this? Two explanations of its indirect importance suggest themselves:

  • Perhaps ability to program was an accessible indicator of more relevant yet harder-to-measure competence. Employers might have been using programming ability, however irrelevant directly, as a shortcut measure to evaluate and sort job applicants or promotion candidates. (This is essentially a recasting of Lester Thurow‘s “job queues” theory about the relationship between educational attainment and hiring, namely that educational attainment signals the ability to learn quickly rather than provides direct training.) Applicants or employees who believed this was happening would thus perceive programming ability as a way to make themselves appear attractive, even though the skill was actually irrelevant.
  • Perhaps students learned to program simply to gain confidence that they could cope with the computer age.

I propose a third explanation:

  • As technology evolves, generations that experience the evolution tend to believe it important for the next generation to understand what came before, and advise students accordingly.

That is, we who experience technological change believe that competence with current technology benefits from understanding prior technology – a technological variant of George Santayana’s aphorism “Those who cannot remember the past are condemned to repeat it” – and send myriad direct and indirect messages to our successors and students that without historical understanding one cannot be fully competent.

Shifting Gears

My father taught me to drive on the family’s 1955 Chevy station wagon, a six-cylinder car with a three-speed, non-synchromesh, stalk-mounted-shifter manual transmission and power nothing. After a few rough sessions learning to get the car moving without bucking and stalling, to turn and shift at the same time, and to double-clutch and downshift while going downhill, I became a pretty good driver.

But my father, who had learned to drive on a Model T Ford with a planetary transmission and separate throttle and spark-advance controls, remained skeptical of my ability. He was always convinced that since I didn’t understand that latter distinction, I really wasn’t operating the car as well as I might. (Today’s “accelerator”, if I understand it correctly, combines the two functions: it tells the engine to spin faster, which is what the spark-advance lever did, and then feeds it the necessary fuel mixture, which was the throttle’s function.)

Years later it came time for our son’s first driving lesson. We were in our automatic-transmission Toyota Camry, equipped with power steering and brakes, on a not-yet-opened Cape Cod subdivision’s newly paved streets. Apparently forgetting how irrelevant the throttle/spark distinction had been to my learning to drive, I delivered a lecture on what was going on in the automatic transmission – why it didn’t need a clutch, how it was deciding when to shift gears, and so forth. Our son listened patiently, and then rapidly learned to drive the Camry very well without any regard to what I’d explained. My lecture had absolutely no effect on his competence (at least not until several years later, I like to believe, when he taught himself to drive a friend’s four-on-the-floor VW).

Technological Instruction

Which brings me to the present, and the challenge of preparing today’s students for tomorrow’s technological workplaces. What should our advice to them be, either explicit – in the form of direct instruction or requirements – or implicit, in the form of contextual guidance such as induced so many students to take my BASIC course? In particular, how can we break away from the generational tendency to emphasize how we got here rather than where we’re going?

I don’t propose to answer that question fully here, but rather to sketch, through two examples, how a future-oriented perspective might differ from a generational one. The first example is cloud services, and the second is online information.

Cloud Services

I started writing this essay on my DC office computer. I’m typing these words on an old laptop I keep in my DC apartment, and I’ll probably finish it on my traveling computer or perhaps on my Chicago home computer. A big problem ensues: how do I keep these various copies synchronized? My answer is a service called Dropbox. What I need is to have the same document up to date wherever I’m working; Dropbox achieves this by copying documents I save to its central servers and then disseminating them automatically to all my other computers and even my phone, keeping the multiple copies synchronized.

Alternatively, I might have gotten what I need – the same document up to date wherever I’m working – by drafting this post as a Google or Live document. Rather than synchronizing local copies among my computers, I’d actually have been editing the same remote document from each of them.
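The synchronization half of that distinction can be sketched in a few lines. The toy model below (not Dropbox’s actual protocol, which is far more sophisticated; just an illustration of the idea) propagates the newest copy of a document to every device, whereas the remote-document model would simply keep one copy on a server:

```python
# Toy last-writer-wins synchronization: each device holds a local copy
# tagged with a timestamp, and a sync pass makes the newest copy universal.
def sync(copies):
    """copies: {device: (timestamp, text)} -> same dict with newest text everywhere."""
    newest = max(copies.values())  # tuples compare timestamp-first
    return {device: newest for device in copies}

copies = {
    "office": (10, "draft v1"),
    "laptop": (12, "draft v2"),  # the most recent edit
    "phone":  (8,  "draft v1"),
}
synced = sync(copies)  # every device now holds "draft v2"
```

The contrast with a Google-style remote document is that there is nothing to reconcile there: every device edits the single server copy, so no sync pass (and no possibility of conflicting local edits) exists.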

My instincts are that this difference between synchronized and remote documents is important, something that I, as an educator, should be sure the next generation understands. When my son asks about how to work across different machines, my inclination is to explain the difference between the options, how one is giving way to the other, and so forth. Is that valid, or is this the same generational fallacy that led my father to explain throttles and spark advance or me to explain clutches and shifting?

Online Information

When I came to the history quote above, I couldn’t remember its precise wording or who wrote it. That’s what the Internet is for, right? Finding information?

I typed “those who ignore the past are doomed”, which was the partial phrase I remembered, into Google’s search box. Among the first page of hits, the first time I tried this, were links to answers.com, wikiquote.org, answers.google.com, wikipedia.org, and www.nku.edu. The first four of those pointed me to the correct quote, usually giving the specific source including edition and page. The last, from a departmental web page at Northern Kentucky University, blithely repeated the incorrect quote (but at least ascribed it to Santayana). One of the sources (answers.com) pointed to an earlier, similar quote from Edmund Burke. The Wikipedia entry reminded me that the quote is often incorrectly ascribed to Plato.

I then typed the same search into Bing’s search box. Many links on its first page of results were the same as Google’s — answers.com and wikiquotes — but there were more links to political comments (most of them embodying incorrect variations on the quote), and one link to a conspiracy-theorist page linking the Santayana quote to George Orwell’s “He who controls the present, controls the past. He who controls the past, controls the future”.

It wasn’t hard for me to figure out which search results to heed and which to ignore. The ability to screen search results and then either to choose which to trust or to refine the search is central to success in today’s networked world. What’s the best way to inculcate that skill in those who will need it?

I’ve been working in IT since before the Digital Equipment Corporation‘s AltaVista, in its original incarnation, became one of the first widely used Web search engines. The methods different search services use to locate and rank information have always been especially interesting. The early AltaVista ranked pages based on how many times search words appeared in them – a method so obviously manipulable (especially by sneaking keywords into non-displayed parts of Web pages) that it rapidly gave way to more robust approaches. The links one gets from Google or Bing today come partly from some very sophisticated ranking said to be based partly on user behavior (such as whether a search seems to have succeeded) and partly on links among sites (this was Google’s original innovation, called PageRank) – but also, quite openly and separately, from advertisers paying to have their sites displayed when users search for particular terms.
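To make the link-analysis idea concrete, here is a toy version of the PageRank computation on an invented three-page link graph (a sketch of the original idea only; production search ranking involves many more signals than this):

```python
# Toy power-iteration PageRank.  The link graph is hypothetical:
# page "c" is linked from both "a" and "b", so it should rank highest.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    """Iteratively redistribute rank along links until it stabilizes."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform rank
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)  # split rank among outlinks
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)  # "c" ends up with the largest share of rank
```

The essential insight, preserved even in this sketch, is that a page's importance derives from the importance of the pages linking to it, which makes the ranking much harder to game than simple keyword counting.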

Here again the generational issue arises. Obviously we want to teach future generations how to search effectively, and how to evaluate the quality and reliability of the information their searches yield. But do we do this by explaining the evolution of search and ranking algorithms – the generational approach based on the preceding paragraph – or by teaching more generally, as bibliographic instructors in libraries have long done, how to cross-reference, assess, and evaluate information whatever its form?

Understanding throttles and spark advance did not help me become a better driver, understanding BASIC probably didn’t help prepare my Harvard students for their future workplaces, and explaining diverse cloud mechanisms and search algorithms isn’t the best way for us to maximize our students’ technological competence. Much as I love explaining things, I think the essence of successful technological teaching is to focus on the future, on the application and consequences of technology rather than its origins.

That doesn’t mean we should discount the importance of history, but rather that history does not suffice as a basis for technological instruction. It’s easier to explain the past than to anticipate the future, but the latter, however risky and uncertain and detached from our personal histories, is our job.

Network Neutrality: Who’s Involved? What’s the Issue? Why Do We Give a Shortstop?

Who’s on First, Abbott and Costello’s classic routine, first reached the general public as part of the Kate Smith Radio Hour in 1938. It then appeared on almost every radio network at some time or another before reaching TV in the 1950s. (The routine’s authorship, as I’ve noted elsewhere, is more controversial than its broadcast history.) The routine can easily be found in many places on the Internet – as a script, as audio recordings, or as videos. Some of its widespread availability is from widely-used commercial services (such as YouTube), some is from organized groups of fans, and some is from individuals. The sources are distributed widely across the Internet (in the IP-address sense).

I can easily find and read, listen to, or watch Who’s on First pretty much regardless of my own network location. It’s there through the Internet2 connection in my office, through my AT&T mobile phone, through my Sprint mobile hotspot, through the Comcast connections where I live, and through my local coffeeshops’ wireless in DC and Chicago.

This, most of us believe, is how the Internet should work. Users and content providers pay for Internet connections, at prices ranging from the cost of a cup of coffee to thousands of dollars, and so the speed of one’s connection may vary by price and location. One may need to pay providers for access, but the network itself transmits similarly no matter where stuff comes from, where it’s going, or what its substantive content is. This, in a nutshell, is what “network neutrality” means.

Yet network neutrality remains controversial. That’s mostly for good, traditional political reasons. Attaining network neutrality involves difficult tradeoffs among the economics of network provision, the choices available to consumers, and the public interest.

Tradeoffs become important when they affect different actors differently. That’s certainly the case for network neutrality:

  • Network operators (large multifunction ones like AT&T and Comcast, large focused ones like Verizon and Sprint, small local ones like MetroPCS, and business-oriented ones like Level3) want the flexibility to invest and charge differently depending on who wants to transmit what to whom, since they believe this is the only way to properly invest for the future.
  • Some Internet content providers (which in some cases, like Comcast, are also networks) want to know that what they pay for connectivity will depend only on the volume and technical features of their material, and not vary with its content, whereas others want the ability to buy better or higher-priority transmission for their content than competitors get — or perhaps to have those competitors blocked.
  • Internet users want access to the same material on the same terms regardless of who they are or where they are on the network.

Political perspectives on network neutrality thus vary depending on who is proposing what conditions for whose network.

But network neutrality is also controversial because it’s misunderstood. Many of those involved in the debate either don’t – or won’t – understand what it means for a public network to be neutral, or indeed what the difference is between a public and a private network. That’s as true in higher education as it is anywhere else. Before taking a position on network neutrality or whose job it is to deal with it, therefore, it’s important to define what we’re talking about. Let me try to do that.

All networks discriminate. Different kinds of network traffic can entail different technical requirements, and a network may treat different technical requirements differently. E-mail, for example, can easily be transmitted in bursts – it really doesn’t matter if there’s a fifty-millisecond delay between words – whereas video typically becomes jittery and unsatisfactory if the network stream isn’t steady. A network that can handle email may not be able to handle video. One-way transmission (for example, a video broadcast or downloading a photo) can require very different handling than a two-way transmission (such as a videoconference). Perhaps even more basic, networks properly discriminate between traffic that respects network protocols – the established rules of the road, if you will – and traffic that attempts to bypass rule-based network management.

Network neutrality does not preclude discrimination. Rather, as I wrote above, a network behaves neutrally if it avoids discriminating on the basis of (a) where transmission originates, (b) where transmission is destined, and (c) the content of the transmission. The first two elements of network neutrality are relatively straightforward, but the third is much more challenging. (Some people also confuse how fast their end-user connection is with how quickly material moves across the network – that is, someone paying for a 1-megabit connection considers the Internet non-neutral if they don’t get the same download speeds as someone paying for a 26-megabit connection – but that’s a separate issue largely unrelated to neutrality.) In particular, it can be difficult to distinguish between neutral discrimination based on technical requirements and non-neutral discrimination based on a transmission’s substance. In some cases the two are inextricably linked.

Consider several ways network operators might discriminate with regard to Who’s on First.

  • Alpha Networks might decide that its network simply can’t handle video streaming, and therefore might configure its systems not to transmit video streams. If a user tries to watch a YouTube version of the routine, it won’t work if the transmission involves Alpha Networks. The user will still be able to read the script or listen to an audio recording of the routine (for example, any of those listed in the Media|Audio Clips section of http://www.abbottandcostello.net/). Although refusing to carry video is clearly discrimination, it’s not discrimination based on source, destination, or content. Alpha Networks therefore does not violate network neutrality.
  • Beta Networks might be willing to transmit video streams, but only from providers that pay it to do so. Say, purely hypothetically, that the Hulu service – jointly owned by NBC and Fox – were to pay Beta Networks to carry its video streams, which include an ad-supported version of Who’s on First. Say further that Google, whose YouTube streams include many Who’s on First examples, were to decline to pay. If Beta Networks transmitted Hulu’s versions but not Google’s, it would be discriminating on the basis of source – and probably acting non-neutrally.

What if Hulu and Google use slightly different video formats? Beta might claim that carrying Hulu’s traffic but not Google’s was merely technical discrimination, and therefore neutral. Google would probably disagree. Who resolves such controversies – market behavior, the courts, industry associations, the FCC – is one of the thorniest points in the national debate about network neutrality. Onward…

  • Gamma Networks might decide that Who’s on First ridicules and thus disparages St. Louis (many performances of the routine refer to “the St Louis team”, although others refer to the Yankees). To avoid offending customers, Gamma might refuse to transmit Who’s on First, in any form, to any user in Missouri. That would be discrimination on the basis of destination. Gamma would violate the neutrality principle.
  • Delta Networks, following Gamma’s lead, might decide that Who’s on First disparages not just St. Louis, but professional baseball in general. Since baseball is the national pastime, and perhaps worried about lawsuits, Delta Networks might decide that Who’s on First should not be transmitted at all, and therefore it might refuse to carry the routine in any form. That would be discrimination on the basis of content. Delta would be violating the neutrality principle.
  • Epsilon Networks, a competitor to Alpha, might realize that refusing to carry video disserves customers. But Epsilon faces the same financial challenges as Alpha. In particular, it can’t raise its general prices to cover the expense of transmitting video since it would then lose most of its customers (the ones who don’t care about video) to Alpha’s lesser but less expensive service. Rather than block video, Epsilon might decide to install equipment that will enable video as a specially provided service for customers who want it, and to charge those customers – but not its non-video customers – extra for the added capability. Whether an operator violates network neutrality by charging more for special network treatment of certain content – the usual term for this is “managed services” – is another one of the thorniest issues in the national debate.
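One way to summarize the first four examples: a policy is neutral if its rules key only on traffic type, never on source, destination, or content. The sketch below (hypothetical, with each network’s policy reduced to a toy rule; Epsilon’s managed-services case is deliberately omitted because it doesn’t reduce to a simple rule) encodes that test:

```python
# Toy encoding of the Alpha..Delta policies described above.
# A rule is "neutral" here if it discriminates only on traffic type,
# never on source, destination, or content.
def is_neutral(rule):
    return not (rule.get("source") or rule.get("destination") or rule.get("content"))

policies = {
    "Alpha": {"type": "video"},                        # blocks all video: neutral
    "Beta":  {"type": "video", "source": "YouTube"},   # blocks one provider: non-neutral
    "Gamma": {"destination": "Missouri"},              # blocks one region: non-neutral
    "Delta": {"content": "Who's on First"},            # blocks one routine: non-neutral
}
verdicts = {name: is_neutral(rule) for name, rule in policies.items()}
```

Of course, the real debate is precisely about the cases this toy test cannot settle, such as Beta’s claim that a format difference makes its discrimination technical rather than source-based.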

As I hope these examples make clear, there are various kinds of network discrimination, and whether they violate network neutrality is sometimes straightforward and sometimes not.  Things become thornier still if networks are owned by content providers or vice versa – or, as is more typical, if there are corporate kinships between the two. Hulu, for example, is partly owned by NBC Universal, which is becoming part of Comcast. Can Comcast impose conditions on “outside” customers, such as Google’s YouTube, that it does not impose on its own corporate cousin?

Why do we give a shortstop (whose name, lest you didn’t read to the end of the Who’s on First script, is “darn”)? That is, why is network neutrality important to higher education? There are two principal reasons.

First, as mobility and blended learning (the combination of online and classroom education) become commonplace in higher education, it becomes very important that students be able to “attend” their college or university from venues beyond the traditional campus. To this end, colleges and universities must be able to provide education to their students and interconnect researchers over the Internet, constrained only by the capacity of the institution’s connection to the Internet, the technical characteristics of online educational materials and environments, and the capacity of students’ connections to the Internet.

Without network neutrality, achieving transparent educational transmission from campus to widely-distributed students could become very difficult. The quality of student experience could come to depend on the politics of the network path from campus to student. To address this, each college and university would need to negotiate transmission of its materials with every network operator along the path from campus to student. If some of those network operators negotiate exclusive agreements for certain services with commercial providers – or perhaps with other colleges or universities – it could become impossible to provide online education effectively.

Second, many colleges and universities operate extensive networks of their own, or together operate specialized inter-campus networks for education, research, administrative, and campus purposes. Network traffic inconsistent with or detrimental to these purposes is managed differently than traffic that serves them. It is important that colleges and universities retain the ability to manage their networks in support of their core purposes.

Networks that are operated by and for the use of particular organizations, like most college and university networks, are private networks. Private and public networks serve different purposes, and thus are managed based on different principles. The distinction is important because the national network-neutrality debate – including the recent FCC action, and its evolving judicial, legislative, and regulatory consequences – is about public networks.

Private networks serve private purposes, and therefore need not behave neutrally. They are managed to advance private goals. Public networks, on the other hand, serve the public interest, and so – network-neutrality advocates argue – should be managed in accordance with public policy and goals. Although this seems a clear distinction, it can become murky in practice.

For example, many colleges and universities provide some form of guest access to their campus wireless networks, which anyone physically on campus may use. Are guest networks like this public or private? What if they are simply restricted versions of the campus’s regular network? Fortunately for higher education, there is useful precedent on this point. The Communications Assistance for Law Enforcement Act (CALEA), enacted in 1994, established principles under which most college and university campus networks are treated as private networks – even if they provide a limited set of services to campus visitors (the so-called “coffee shop” criterion).

Higher education needs neutrality on public networks because those networks are increasingly central to education and research. At the same time, higher education needs to manage campus networks and private networks that interconnect them in support of education and research, and for that reason it is important that there be appropriate policy differentiation between public and private networks.

Regardless, colleges and universities need to pay for their Internet connectivity, to negotiate in good faith with their Internet providers, and to collaborate effectively on the provision and management of campus and inter-campus networks. So long as colleges and universities act effectively and responsibly as network customers, they need assurance that their traffic will flow across the Internet without regard to its source, destination, or content.

And so we come to the central question: Assuming that higher education supports network neutrality for public networks, do we care how its principles – that public networks should be neutral, and that private ones should be manageable for private purposes – are promulgated, interpreted, and enforced? Since the principles are important to us, as I outlined above, we care that they be implemented effectively, robustly, and efficiently. Since the public/private distinction seems to be relatively uncontroversial and well understood, the core issue is whether and how to address network neutrality for public networks.

There appear to be four different ideas about how to implement network neutrality.

  1. A government agency with the appropriate scope, expertise, and authority could spell out the circumstances that would constitute network neutrality, and prescribe mechanisms for correcting circumstances that fall short of them. Within the US, this would need to be a federal agency, and the only one arguably up to the task is the Federal Communications Commission. The FCC has acted in this way, but questions remain about whether it has the authority to proceed as it has proposed.
  2. The Congress could enact laws detailing how public networks must operate to ensure network neutrality. In general, it has proven more effective for the Congress to specify a broad approach to a public-policy problem, and then to create and/or empower an appropriate government agency to figure out what guidelines, regulations, and redress mechanisms are best. Putting detail into legislation tends to invite all kinds of special negotiations and provisions, and the result is then quite hard to change.
  3. The networking industry could create an internal body to promote and enforce network neutrality, with appropriate powers to take action when its members fail to live up to neutrality principles. Voluntary self-regulatory entities like this have been successful in some contexts and not in others. Thus far, however, the networking industry is internally divided as to the wisdom of network neutrality, and without agreement on the principle it is hard to see how there could be agreement on self-regulation.
  4. Network neutrality could simply be left to the market. That is, if network neutrality is important to customers, they will buy services from neutral providers and avoid services from non-neutral providers. The problem here is that network neutrality must extend across diverse networks, and individual consumers – even if they are large organizations such as many colleges and universities – interact only with their own “last mile” provider.

Those of us in higher education who have been involved in the network-neutrality debates have come to believe that among these four approaches the first is most likely to yield success and most likely to evolve appropriately as networking and its applications evolve. This is especially true for wireless (that is, cellular) networking, where there remain legitimate questions about what level of service should be subject to neutrality principles, and what kinds of service might legitimately be considered managed, extra-cost services.

In theory, the national debate about network neutrality will unfold through four parallel processes. Two of these are already underway: the FCC has issued an order “to Preserve Internet Freedom and Openness”, and at least two network operators have filed lawsuits challenging the FCC’s authority to do that. So we already have agency and court involvement, and we can expect possible congressional actions and industry initiatives to round out the set.

One thing’s sure: This is going to become more complicated and confusing…

Lou: I get behind the plate to do some fancy catching. Tomorrow’s pitching on my team and a heavy hitter gets up. Now the heavy hitter bunts the ball. When he bunts the ball, me, being a good catcher, I’m gonna throw the guy out at first base. So I pick up the ball and throw it to who?

Bud: Now that’s the first thing you’ve said right.

Lou: I don’t even know what I’m talking about!