Posts Tagged ‘network’

Timsons, Molloys, & Collective Efficiency in Higher Education IT

It’s 2006, and we’re at Duke, for a meeting of the Common Solutions Group.

On the formal agenda, Paul Courant seeks to resurrect an old idea of Ira Fuchs’s, for a collective higher-education IT development-and-procurement entity provisionally called Educore.

On the informal agenda, a bunch of us work behind the scenes trying to persuade two existing higher-education IT entities–Internet2 and National LambdaRail–that they would better serve their constituencies, which overlap but do not coincide, by consolidating into a single organization.

The merged organization would both lease capacity with some restrictions (the I2 model) and “own” it free and clear (the NLR model; the quotes because in many cases NLR owns 20-year “rights to use”–RTUs–rather than actual infrastructure). The merged organization would find appropriate ways to serve the sometimes divergent interests of IT managers and IT researchers in higher education.

Most everyone appears to agree that having two competing national networking organizations for higher education wastes scarce resources and constrains progress. But both NLR and Internet2 want to run the consolidated entity. Also, there are some personalities involved. Our work behind the scenes is mostly shuttle diplomacy involving successively more complex drafts of charter and bylaws for a merged networking entity.

Throughout the process I have a vague feeling of déjà vu.

Partly I’m wistfully remembering the long and somewhat similar courtship between CAUSE and Educom, which eventually yielded today’s merged EDUCAUSE. I’m hoping that we’ll be able to achieve something similar for national higher-education networking.

And partly I’m remembering a counterexample, the demise of the American Association for Higher Education, which for years held its annual meeting at the Hilton adjacent to Grant Park in Chicago (almost always overlapping my birthday, for some reason). AAHE was an umbrella organization aimed comprehensively at leaders and middle managers throughout higher education, rather than at specific subgroups such as registrars, CFOs, admissions directors, housing managers, CIOs, and so forth. It also attracted higher-education researchers, which is how I started attending, since that’s what I was.

AAHE collapsed, many think, because of the broad middle-management organization’s gradual splintering into a panoply of “caucuses” that eventually went their own ways, and to a certain extent its leaders aligning AAHE with too many faddish bandwagons. (To this day I wince when I hear the otherwise laudable word “assessment”.) It was also affected by the growing importance of discipline-specific organizations such as NACUBO, AACRAO, and NASPA–not to mention Educom and CAUSE–and it always vied for leadership attention with the so-called “presidential” organizations such as ACE, AAU, APLU, NAICU, and ACC.

Together the split into caucuses and over-trendiness left AAHE with no viable general constituency or finances to continue its annual meetings, its support for Change magazine, or its other crosscutting efforts. AAHE shut down in 2005, and disappeared so thoroughly that it doesn’t even have a Wikipedia page; its only online organizational existence is at the Hoover Institution’s archives, which hold its papers.

At the Duke CSG meeting I’m hoping, as we work on I2 and NLR leaders to encourage convergence, that NLR v. I2 won’t turn out like AAHE, and that instead the two organizations will agree to a collaborative process leading to synergy and merger like that of CAUSE and Educom.

We fail.

Following the Duke CSG meeting, NLR and I2 continue to compete. They manage to collaborate briefly on a joint proposal for federal funding, a project called “U.S. UCAN”, but then that collaboration falls apart as NLR’s finances weaken. Internet2 commits to cover NLR’s share of U.S. UCAN, an unexpected burden. NLR hires a new CEO to turn things around; he leaves after less than a year. NLR looks to the private sector for funding, and finds some, but it’s not enough: its network shuts down abruptly in 2014.

In the event, Internet2 survives, especially by extending its mission beyond higher education, and by expanding its collective-procurement activities to include a diversity of third-party products and services under the Net+ umbrella. It also builds some cooperative ventures with EDUCAUSE, such as occasional joint conferences and a few advocacy efforts.

Meanwhile, despite some false starts and missed opportunities, the EDUCAUSE merger succeeds. The organization grows and modernizes. It tackles a broad array of services to and advocacy on behalf of higher-education IT interests, organizations, and staff.


But now I’m having a vague feeling of déjà vu all over again. As was the case for I2/NLR, I sense, there’s little to be gained and some to be lost from Internet2 and EDUCAUSE continuing as separate organizations.

Partly the issue is simple organizational management efficiency: in these times of tight resources for colleges, universities, and state systems, does higher education IT really need two financial staffs, two membership-service desks, two marketing/communications groups, two senior leadership teams, two Boards, and for that matter two CEOs? (Throw ACUTA, Unizin, Apereo, and other entities into the mix, and the question becomes even more pressing.)

But partly the issue is deeper. EDUCAUSE and Internet2 are beginning to compete with one another for scarce resources in subtle ways: dues and memberships, certainly, but also member allegiance, outside funding, and national roles. That competition, if it grows, seems perilous. More worrisome still, some of the competition is of the non-salutary I’m OK/You’re Not OK variety, whereby each organization thinks the other should be subservient.

We don’t quite have a Timson/Molloy situation, I’m glad to say. But with little productive interaction at the organizations’ senior levels to build effective, equitable collaboration, there’s unnecessary risk that competitive tensions will evolve into feudal isolation.

If EDUCAUSE and Internet2 can work together on the basis of mutual respect, then we can minimize that risk, and maybe even move toward a success like CAUSE/Educom becoming EDUCAUSE. If they can’t–well, if they can’t, then I think about AAHE, and NLR’s high-and-dry stakeholders, and I worry.

Mythology, Belief, Analytics, & Behavior

I’m at loose ends after graduating. The Dean for Student Affairs, whom I’ve gotten to know through a year of complicated political and educational advocacy, wants to know more about MIT‘s nascent pass/fail experiment, under which first-year students receive written rather than graded evaluations of their work.

MIT being MIT, “know more” means data: the Dean wants quantitative analysis of patterns in the evaluations. I’m hired to read a semester’s worth, assign each a “Usefulness” score and a “Positiveness” score, and then summarize the results statistically.

Two surprises. First, Usefulness turns out to be much higher than anyone had expected–mostly because evaluations contain lots of “here’s what you can do to improve” advice, rather than lots of terse “you would have gotten a B+” comments, as had been predicted. Second, Positiveness is distributed remarkably like grades had been for the pre-pass/fail cohort, rather than skewing higher, as had been predicted. Even so, many faculty continue to believe both predictions (that is, they think written evaluations are both generally useless and inappropriately positive).

A byproduct of the assignment is my first exposure to MIT’s glass-house computer facility, an IBM 360 located in the then-new Building 39. In due course I learn that Jay Forrester, an MIT faculty member, had patented the use of 3-D arrays of magnetic cores for computer memory (the read-before-write use of cores, which enabled Forrester’s breakthrough, had been patented by An Wang, another faculty member, of the eponymous calculators and word processors). IBM bought Wang’s patent, but not Forrester’s, and after protracted legal action eventually settled with Forrester in 1964 for $13 million.

According to MIT mythology, under the Institute’s intellectual-property policy half of the settlement came to the Institute, and that money built Building 39. Only later do I wonder whether the Forrester/IBM/39 mythology is true. But not for long: never let truth stand in the way of a good story.

Belief in mythology is durable, and not just because mythology often involves memorable, simple stories. This matters because belief so heavily drives behavior. That belief resists even data-driven contradiction–data analysis rarely yields memorable, simple stories–is one reason analytics so often prove curiously ineffective in modifying institutional behavior.

Two examples, both involving the messy question of copyright infringement by students and what, if anything, campuses should do about it.

44%

I’m having lunch with a very smart, experienced, and impressive senior officer from an entertainment-industry association, whom I’ll call Stan. The only reason universities invest heavily in campus networks, Stan tells me, is to enable students to download and share ever more copyright-infringing movies, TV shows, and music. That’s why campuses remain major distributors of “pirated” entertainment, he says, and therefore why it’s appropriate to subject higher education generally to regulations and sanctions such as the “peer to peer” regulations from the 2008 Higher Education Opportunity Act.

That Stan believes this results partly from a rhetorical problem with high-performance networks, such as the research networks within and interconnecting colleges and universities. High-performance networks–even those used by broadcasters–usually are engineered to cope with peak loads. Since peaks are occasional, most of the time most network capacity goes unused. If one doesn’t understand this–as Stan doesn’t–then one assumes that the “unused” capacity is in fact being used, but for purposes not being disclosed.

And, as it happens, there’s mythology to fill in the gap: According to a 2005 MPAA study, Stan tells me, higher education accounts for almost half of all copyright infringement. So MPAA, and therefore Stan, knows what campuses aren’t telling us: they’re upgrading campus networks to enable infringement.

But Stan is wrong. There are two big problems with his belief.

First, shortly after MPAA asserted, both publicly and in letters to campus presidents, that 44% of all copyright infringement emanates from college campuses, which is where Stan’s “almost half” comes from, MPAA learned that its data contractor had made a huge arithmetic error. The correct estimate should have been more like 10-15%. But the corrected estimate was never publicized as extensively as the erroneous one: the errors that statisticians make live after them; the corrections are oft interred with their bones.

Second, if Stan’s belief is correct, then there should be little difference among campuses in the incidence of copyright infringement, at least among campuses with research-capable networking. Yet this isn’t the case. As I’ve found researching three years of data on the question, the distribution of detected infringement is highly skewed. Most campuses are responsible for little or no distribution of infringing material, presumably because they’re using PacketLogic, Palo Alto firewalls, or similar technologies to manage traffic. Conversely, a few campuses account for the lion’s share of detected infringement.

So there are ample data and analytics contradicting Stan’s belief, and none supporting it. But his belief persists, and colors how he engages the issues.

Targeting

I’m having dinner with the CIO from an eminent research university; I’ll call her Samantha, and her campus Helium (the same name it has in the infringement-data post I cited above). We’re having dinner just as I’m completing my 2013 study, in which Helium has surpassed Hydrogen as the largest campus distributor of copyright-infringing movies, TV shows, and music.

In fact, Helium accounts for 7% of all detected infringement from the 5,000 degree-granting colleges and universities in the United States. I’m thinking that Samantha will want to know this, that she will try to figure out what Helium is doing–or not doing–to stick out like such a sore thumb among peer campuses, and perhaps make some policy or practice changes to bring Helium into closer alignment.

But no: Samantha explains to me that the data are entirely inaccurate. Most of the infringement notices Helium receives are duplicates, she tells me, and in any case the only reason Helium receives so many is that the entertainment industry intentionally targets Helium in its detection and notification processes. Since the data are wrong, she says, there’s no need to change anything at Helium.

I offer to share detailed data with Helium’s network-security staff so that they can look more closely at the issue, but Samantha declines the offer. Nothing changes, and in 2014 Helium is again one of the top recipients of infringement notices (although Hydrogen regains the lead it had held in 2012).

The data Samantha declines to see tell an interesting story, though. The vast majority of Helium’s notices, it turns out, are associated with eight IP addresses. That is, each of those eight IP addresses is cited in hundreds of notices, which may account for Samantha’s comment about “duplicates”. Here’s what’s interesting: the eight addresses are consecutive, and they each account for about the same number of notices. That suggests technology at work, not individuals.
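
A heuristic along these lines is easy to sketch. The function, thresholds, and data below are all hypothetical, not the study's actual method: the idea is simply that if most of a campus's notices fall on a small, consecutive, evenly loaded run of IP addresses, a NAT front end is the likely explanation.

```python
from ipaddress import ip_address

def looks_like_nat_block(notice_counts, min_share=0.8, max_spread=0.5):
    """Heuristic: do most notices cluster on a small, consecutive,
    evenly loaded group of IPs, as a NAT front end would produce?

    notice_counts: dict mapping IP-address string -> number of notices.
    """
    total = sum(notice_counts.values())
    # Sort addresses numerically and split them into consecutive runs.
    ips = sorted(notice_counts, key=lambda s: int(ip_address(s)))
    runs, run = [], [ips[0]]
    for prev, cur in zip(ips, ips[1:]):
        if int(ip_address(cur)) - int(ip_address(prev)) == 1:
            run.append(cur)
        else:
            runs.append(run)
            run = [cur]
    runs.append(run)
    # Take the run carrying the most notices.
    best = max(runs, key=lambda r: sum(notice_counts[ip] for ip in r))
    counts = [notice_counts[ip] for ip in best]
    share = sum(counts) / total
    # "Evenly loaded": the spread within the run is small.
    even = (max(counts) - min(counts)) / max(counts) <= max_spread
    return share >= min_share and even and len(best) > 1

# Made-up example: eight consecutive addresses with similar counts,
# plus one stray address elsewhere.
notice_counts = {f"192.0.2.{i}": 110 + i for i in range(10, 18)}
notice_counts["198.51.100.5"] = 3
nat_suspected = looks_like_nat_block(notice_counts)  # True for this pattern
```

On Helium's data, a pattern like this would flag the eight addresses immediately; scattered notices across unrelated addresses would not trip the heuristic.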

As in Stan’s case, it helps to know something about how campus networks work. Lots of traffic distributed evenly across a small number of IP addresses sounds an awful lot like load balancing, so perhaps those addresses are the front end to some large group of users. “Front end to some large group of users” sounds like an internal network using Network Address Translation (NAT) for its external connections.

NAT issues numerous internal IP addresses to users, and then translates those internal addresses into a much smaller set of external addresses, logging the translations so they can be traced. Most campuses use NAT to conserve their limited allocation of external IP addresses, especially for their campus wireless networks. NAT logs, if kept properly, enable campuses to trace connections from insiders to outside and vice versa, and so to resolve those apparent “duplicates”.

So although it’s true that there are lots of duplicate IP addresses among the notices Helium receives, this probably stems from Helium’s use of NAT on its campus wireless. Helium’s data are not incorrect. If Helium were to manage NAT properly, it could figure out where the infringement is coming from, and address it.
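
As a concrete illustration of what "manage NAT properly" means (the log format, addresses, and timestamps here are invented, not Helium's): a retained NAT log lets security staff map the external address, port, and timestamp cited in a notice back to the internal user who held that translation at that moment.

```python
from datetime import datetime

# Hypothetical NAT log: (start, end, internal_ip, external_ip, external_port)
nat_log = [
    (datetime(2014, 2, 3, 9, 0), datetime(2014, 2, 3, 11, 30),
     "10.20.1.44", "192.0.2.12", 40112),
    (datetime(2014, 2, 3, 9, 15), datetime(2014, 2, 3, 10, 0),
     "10.20.7.91", "192.0.2.12", 40113),
]

def resolve(external_ip, external_port, when):
    """Map the (external IP, port, timestamp) cited in an infringement
    notice back to the internal address that held that translation."""
    for start, end, internal_ip, ext_ip, ext_port in nat_log:
        if ext_ip == external_ip and ext_port == external_port and start <= when <= end:
            return internal_ip
    return None
```

If the log isn't retained long enough (Helium's actual problem, as it turns out later in this post), the lookup returns nothing and the notice dead-ends at the NAT box.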

Samantha’s belief that copyright holders target specific campuses, like Stan’s that campuses expand networks to encourage infringement, has a source–in this case, a presentation some years back from an industry association to a group of IT staff from a score of research universities. (I attended this session.) Back then, we learned, the association did target campuses, not out of animus, but simply as a data-collection mechanism. The association would choose a campus, look for infringing material being published from the campus’s network, send notices, and then move on to another campus.

Since then, however, the industry had changed its methodology, in large part because the BitTorrent protocol replaced earlier ones as the principal medium for download-based infringement. Because of how BitTorrent works, the industry’s methodology shifted from searching particular networks to searching BitTorrent indexes for particularly popular titles and then seeing which networks were making those titles available.

I spent lots of time recently with the industry’s contractors looking closely at that methodology. It appears to treat campus networks equivalently to each other and to commercial networks, and so it’s unlikely that Helium was being targeted as Samantha asserted.

If Samantha had taken the infringement data to her security staff, they probably would have discovered the same thing I did, and either used NAT data to identify offenders, or perhaps to justify policy changes for the wireless network. Same goes for exploring the methodology. But instead Samantha relied on her belief that the data were incorrect and/or targeted.

Promoting Analytic Effectiveness

Because of Stan’s and Samantha’s belief in mythology, their organizations’ behavior remains largely uninformed by analytics and data.

A key tenet in decision analysis holds that information has no value (other than the intrinsic value of knowledge) unless the decisions an individual or an institution has before it will turn out differently depending on the information. That is, unless decisions depend on the results of data analysis, it’s not worth collecting or analyzing data.

Colleges, universities, and other academic institutions have difficulty accepting this, since the intrinsic value of information is central to their existence. But what’s valuable intrinsically isn’t necessarily valuable operationally.
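
The tenet can be made concrete with a toy expected-value-of-information calculation. All the numbers below are invented for illustration: the value of (perfect) information is the difference between the best expected payoff achievable with it and without it, and when no decision would change, that difference is zero.

```python
# Hypothetical decision: invest in traffic management (cost 50) or not.
# Two states: the campus is high-incidence (p = 0.2) or low-incidence (p = 0.8).
# Payoffs are net benefits in arbitrary units: payoff[action][state].
p_high = 0.2
payoff = {
    "invest": {"high": 100 - 50, "low": 0 - 50},
    "skip":   {"high": 0,        "low": 0},
}

def expected(action):
    return p_high * payoff[action]["high"] + (1 - p_high) * payoff[action]["low"]

# Without information: commit to the single action with the best expected payoff.
best_without = max(expected(a) for a in payoff)

# With perfect information: pick the best action in each state, then average.
best_with = (p_high * max(payoff[a]["high"] for a in payoff)
             + (1 - p_high) * max(payoff[a]["low"] for a in payoff))

evpi = best_with - best_without  # expected value of perfect information
```

Here knowing the state is worth 10 units, precisely because the invest/skip decision flips depending on it. If investing never paid off in either state, the information would be worth zero: collecting the data would change nothing.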

Generic praise for “data-based decision making” or “analytics” won’t change this. Neither will post-hoc documentation that decisions are consistent with data. Rather, what we need are good, simple stories that will help mythology evolve: case studies of how colleges and universities have successfully and prospectively used data analysis to change their behavior for the better. Simply using data analysis doesn’t suffice, and neither does better behavior: we need stories that vividly connect the two.

Ironically, the best way to combat mythology is with–wait for it–mythology…

Revisiting IT Policy #2: Campus DMCA Notices

Under certain provisions from the Digital Millennium Copyright Act, copyright holders send a “notification of claimed infringement” (sometimes called a “DMCA” or “takedown” notice) to Internet service providers, such as college or university networks, when they find infringing material available from the provider’s network. I analyzed counts of infringement notices from the four principal senders to colleges and universities over three time periods (Nov 2011-Oct 2012, Feb/Mar 2013, and Feb/Mar 2014).

In all three periods, most campuses received no notices, even campuses with dormitories. Among campuses receiving notices, the distribution is highly skewed: a few campuses account for a disproportionately large fraction of the notices. Five campuses consistently top the distribution in each year, but beyond these there is substantial fluctuation from year to year.

The volume of notices sent to campuses varies somewhat positively with their size, although some important and interesting exceptions keep the correlation small. The incidence of detected infringement varies strongly with how residential campuses are. It varies less predictably with proxy measures of student-body affluence.

I elaborate on these points below.

Patterns

The estimated total number of notices for the twelve months ending October 2012 was 243,436. The actual number of notices in February/March 2013 was 39,753, and the corresponding number a year later was 20,278.

The general pattern was the same in each time period.

  • According to the federal Integrated Postsecondary Education Data System (IPEDS), from which I obtained campus attributes, there are 4,904 degree-granting campuses in the United States. Of these, over 80% received no infringement notices in any of the three time periods.
  • 90% of infringement notices went to campuses with dormitories.
  • Of the 801 institutions that received at least one notice in one period, 607 received at least one notice in two periods, and 437 did so in all three. The distribution was highly skewed among the campuses that received at least one infringement notice. The top two recipients in each period were the same: they alone accounted for 12% of all notices in 2012, and 10% in 2013 and 2014.
  • In 2012, 10 institutions accounted for a third of all notices, and 41 accounted for two thirds. In 2013, the distribution was only a little less skewed: 22 institutions accounted for a third of all notices, and 94 accounted for two thirds. In 2014, 22 institutions also accounted for a third of all notices, and 99 accounted for two thirds.
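
The "N institutions account for a third/two thirds" figures above come from a simple cumulative tally over campuses sorted by notice count. A sketch, with made-up counts rather than the study's data:

```python
def campuses_to_reach(counts, fraction):
    """Smallest number of top-recipient campuses whose notices sum to at
    least the given fraction of all notices."""
    total = sum(counts)
    running = 0
    for i, c in enumerate(sorted(counts, reverse=True), start=1):
        running += c
        if running >= fraction * total:
            return i
    return len(counts)

# Made-up, skewed counts for ten campuses (total = 1,000 notices):
counts = [300, 200, 100, 100, 100, 50, 50, 50, 25, 25]
third = campuses_to_reach(counts, 1/3)       # top 2 campuses reach a third
two_thirds = campuses_to_reach(counts, 2/3)  # top 4 campuses reach two thirds
```

The more skewed the distribution, the smaller these two numbers; comparing them across years is a quick gauge of whether concentration is rising or falling.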

Campus Type

In 2014, just 590 of the 4,904 campuses received infringement notices. Here is a breakdown by institutional control and type:

[Table: 2014 infringement notices by institutional control and type]

Here are the same data, this time broken down by campus size and residential character (using dormitory beds per enrolled student to measure the latter; the categories are quintiles):

[Table: 2014 infringement notices by campus size and residential quintile]

About a third of all notices went to very large campuses in the middle residential quintile. In keeping with the classic Pareto ratio, the largest 20% of campuses account for 80% of all notices (and enroll ¾ of all students). Although about half of the largest group is nonresidential (mostly community colleges, plus some state colleges), only a few of them received notices.

Campus Distributions

The top two among the 100 campuses that received the most notices in Feb/Mar 2014 received over 1,000 notices each in the two months. The next highest campus received 615. As the graph below shows, the top 100 campuses accounted for two thirds of the notices; the next 600 campuses accounted for the remaining third (click on this graph, or the others below, to see it full size):

[Graph: cumulative share of Feb/Mar 2014 notices across recipient campuses]

Below is a more detailed distribution for the top 30 recipient campuses, with comparisons to 2012 and 2013 data. To enable valid comparison, this chart shows the fraction of notices received by each campus in each year, rather than the total. The solid red bars are the campus’s 2014 share, and the lighter blue and green bars are the 2012 and 2013 shares. The hollow bar for each campus is the incidence of detected infringement, defined as the number of 2014 notices per thousand headcount students.

[Graph: notice shares and incidence for the top 30 recipient campuses, 2012-2014]

As in earlier analyses, there is an important distinction between campuses whose high volume of notices stems largely from their size, and those where it stems from a combination of size and incidence—that is, the ratio of notices received to enrollment.

In the graph, Carbon and Nitrogen are examples of the former: they are both very large public urban universities enrolling over 50,000 students, but with relatively low incidence of around 7 notices per thousand students. They stand in marked contrast to incidences of 20-60 notices per thousand students at Lithium, Boron, Neon, Magnesium, Aluminum, and Silicon, each of which enrolls 10,000-25,000 students—all private except Aluminum.

Changes over Time

The overall volume of infringement notices varies from time to time depending on how much effort copyright holders devote to searching for infringement (effort costs money), and to a lesser extent based on which titles they use to seed searches. The volume of notices sent to campuses varies accordingly. However, the distribution of notices across campuses should not be affected by the total volume. To analyze trends, therefore, it is important to use a metric independent of total volume.
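
The two normalizations used throughout this analysis are straightforward (the campus names and counts below are made up for illustration): a campus's share of each period's total removes the effect of overall search volume, and notices per thousand students removes the effect of campus size.

```python
def shares(counts_by_campus):
    """Each campus's fraction of the period's total notices, which is
    comparable across periods with different overall volumes."""
    total = sum(counts_by_campus.values())
    return {campus: n / total for campus, n in counts_by_campus.items()}

def incidence(notices, headcount):
    """Detected-infringement incidence: notices per thousand students."""
    return 1000 * notices / headcount

# Made-up period data:
period = {"CampusA": 1200, "CampusB": 600, "CampusC": 200}
s = shares(period)         # CampusA -> 0.6, CampusB -> 0.3, CampusC -> 0.1
i = incidence(615, 20500)  # 30.0 notices per thousand students
```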

As in the preceding section, I used the fraction of all campus notices each campus received for each period. The top two campuses were the same in all three years: Hydrogen was highest in 2012 and 2014, and Helium was highest in 2013.

Only five campuses received at least 1.5% of all notices in more than one year:

[Graph: the five campuses receiving at least 1.5% of all notices in more than one year]

These campuses consistently stand at the top of the list, account for a substantial fraction of all infringement notices, and except for Beryllium have incidence over 20. As I argue below, it makes sense for copyright holders to engage them directly, to help them understand how different they are from their peers, and perhaps to persuade them to better “effectively combat” infringement from their networks by adopting policies and practices from their low-incidence peers.

Aside from these five campuses, there is great year-to-year variation in how many notices campuses receive. Below, for example, is a similar graph for the approximately 50 campuses receiving 0.5%-1.5% of all notices in at least one of the three years. Such year-to-year variation makes engagement much more difficult to target efficiently and much less likely to have discernible effects.

[Graph: campuses receiving 0.5%-1.5% of all notices in at least one of the three years]

Relationships

Size

All else equal, if infringement is the same across campuses and campuses take equally effective measures to prevent it from reaching the Internet, then the volume of detected infringement should generally vary with campus size. That this is only moderately the case implies that student behavior varies from campus to campus and/or that campuses’ “effectively combat” measures are different and have different effects.

Here are data for the 100 campuses receiving the most infringement notices in 2014:

[Graph: campus size vs. notice volume for the top 100 recipient campuses, 2014]

It appears visually that the overall correlation between campus size and notice volume is modest (and indeed r=0.29) because such a large volume of notices went to Hydrogen and Helium, which are not the largest campuses.

However, the correlation is slightly lower if those two campuses are omitted. This is because Lithium has the next highest volume, yet is of average size, and Manganese, the largest campus in the group, with over 70,000 students, had very low incidence of 2 notices per thousand students. (I’ve spoken at length with the CIO and network-security head at Manganese, and learned that its anti-infringement measures comprise a full array of policies and practices: blocking of peer-to-peer protocols at the campus border, with well-established exception procedures; active followthrough on infringement notices received; and direct outreach to students on the issue.)
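
The correlations reported here are ordinary Pearson coefficients. A minimal hand-rolled version (the enrollment and notice figures below are invented, not the study's data) shows the kind of with/without-outliers comparison described above:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: enrollment vs. notice volume, with two high-volume outliers.
enroll  = [70000, 52000, 24000, 18000, 12000, 20000, 21000]
notices = [  140,   350,  1050,   615,   400,  1200,  1150]

r_all = pearson(enroll, notices)
# Recompute without the two highest-volume campuses:
keep = [i for i in range(len(notices)) if notices[i] < 1100]
r_trim = pearson([enroll[i] for i in keep], [notices[i] for i in keep])
```

Because outliers can either inflate or deflate r depending on where they sit relative to the trend, dropping them does not always raise the correlation, which is exactly the counterintuitive result seen with Lithium and Manganese.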

Residence

If students live on campus, then typically their network connection is through the campus network, their detectable infringement is attributed to the campus, and that’s where the infringement notice goes. If students live off campus, then they do not use the campus network, and infringement notices go to their ISP. This is why most infringement notices go to campuses with dorms, even though the behavior of their students probably resembles that of their nonresidential peers.

For the same reason, we might expect that residentially intensive campuses (measured by the ratio of dormitory beds to total enrollment) would have a higher incidence of detectable infringement, all else equal, than less residential campuses. Here are data for the 100 campuses receiving the most infringement notices:

[Graph: residential intensity vs. incidence for the top 100 recipient campuses]

The relationship is positive, as expected, and relatively strong (r=.58). It’s important, though, to remember that this relationship between campus attributes (residential intensity and the incidence of detected infringement) does not necessarily imply a relationship between student attributes such as living in dorms and distributing infringing material. Drawing inferences about individuals from data about groups is the “ecological fallacy.”

Affluence

One hears arguments that infringement varies with affluence, that is, that students with less money are more likely to infringe. There’s no way to assess that directly with these data, since they do not identify individuals. However, IPEDS campus data include the fraction of students receiving Federal grant aid, which varies inversely with income. The higher this fraction, the less affluent, on average, the student body should be. So it’s interesting to see how infringement (measured by incidence rather than volume) varies with this metric:

[Graph: fraction of students receiving Federal grant aid vs. incidence]

The relationship is slightly negative (r=-.12), in large part because of Polonium, a small private college with few financial-aid recipients that received 83 notices per 1000 students in 2014. (Its incidence was similar in 2012, but much lower in 2013.) Even without Polonium, however, the relationship is small.

For the same reason, we might expect a greater incidence of detected infringement on less expensive campuses. The data:

[Graph: campus price vs. incidence]

Once again the relationship is the opposite (r=.54), largely because most campuses have both low tuition and low incidence.

Campus Interactions

Following the 2012 and 2013 studies, I communicated directly with IT leaders at several campuses with especially high volumes of infringement notices. All of these interactions save one (Hydrogen’s) were informative, and several appear to have influenced campus policies and practices for the better.

  • Helium. Almost all of Helium’s notices are associated with a small, consecutive group of IP addresses, presumably the external addresses for a NAT-mediated campus wireless network. I learned from discussions with Helium’s CIO that the university does not retain NAT logs long enough to identify wireless users when infringement notices are received; as a result, few infringement notices reach offenders, and so they have little impact directly or indirectly. Helium recognizes the problem, but replacing its wireless logging systems is not a high-priority project.
  • Hydrogen. Despite diverse direct, indirect, and political efforts to engage IT leaders at Hydrogen, I was never able to open discussions with them. I do not understand why the university receives so many notices (unlike Helium’s, they are not concentrated), and was therefore unable to provide advice to the campus. It is also unclear whether the notices sent to Hydrogen are associated with its small-city main campus or with its more urban branch campus.
  • Krypton. Krypton used to provide guests up to 14 days of totally unrestricted and anonymous use of its wired and wireless networks. I believe that this led to its high rate of detected infringement. More recently, Krypton implemented a separate guest wireless network, which is still anonymous but apparently is either more restricted or is routed to an external ISP. I believe that this change is why Krypton is no longer in the top 20 group in 2014. (Krypton still offers unrestricted 14-day access to its wired network.)
  • Lithium. The network-security staff at Lithium told me that there are plans to implement better filtering and blocking on their network, but that implementation has been delayed.
  • Nitrogen. Nitrogen enrolls over 50,000 students, more than almost any other campus. As I pointed out above, although Nitrogen’s infringement notice counts are substantial, they are actually relatively low when adjusted for enrollment.
  • Gallium. I discussed Gallium’s high infringement volume with its CIO in early 2013. She appeared to be surprised that the counts were so high, and that they were not all associated with Gallium affiliate campuses, as the university had previously believed. Although the CIO was noncommittal about next steps, it appears that something changed for the better.
  • Palladium. The Palladium CIO attended a Symposium I hosted in March 2013, and while there he committed to implementing better controls at the University. The CIO appears to have followed through on this commitment.
  • No Alias. Although it doesn’t appear in the graph, No Alias is an interesting story. It ranked very high in the 2012 study. NA, it turns out, provides exit connections for the Tor network, which means that some traffic that appears to originate at NA in fact originates from anonymous users elsewhere. Most of NA’s 2012 notices were associated with the Tor connections, and I suggested to NA’s security officer that perhaps No Alias might impose some modest filters on those. It appears that this may have happened, and may be why NA dropped out of the top group.

I also interacted with several other campuses that ranked high in 2013. In many of these conversations I was able to point IT staff to specific problems or opportunities, such as better configuring firewalls. Most of these campuses moved out of the top group.

And So…

The 2014 DMCA notice data reinforce what earlier data and direct interactions had already implied about how campuses and copyright holders should engage. Copyright holders should interact directly with the few institutions that rank consistently high, and with large residential institutions that rank consistently low. In addition, copyright holders should seek opportunities to better understand how best to influence student behavior, both during and after college.

Conversely, campuses that receive disproportionately many notices, and so give higher education a bad reputation with regard to copyright infringement, should consult peers at the other end of the distribution, and identify reasonable ways to improve their policies and practices.

9|4|14 gj-c


Revisiting IT Policy #1: Network Neutrality

The last time I wrote about network neutrality, higher education was deeply involved in the debate, especially through the Association of Research Libraries and EDUCAUSE, whose policy group I then headed. We supported a proposal by the then Federal Communications Commission (FCC) chairman, Julius Genachowski, to require public non-managed last-mile networks to transmit end-user Internet traffic neutrally.

We worried that otherwise those networks might favor commercial over intellectual content, and so make it difficult for off-campus students to access course, library, and other campus content, and for campus entities such as libraries to access content on other campuses or in central shared repositories. (The American Library Association had similar worries on behalf of public libraries and their patrons.) Almost as a footnote, we opposed so-called “paid prioritization”, an ill-defined concept, rarely implemented, but now reborn as “Internet fast lanes”.

Although courts overturned the FCC on neutrality, for the most part its key principle has held: traffic should flow across the Internet without regard for its source, its destination, or its content.

But the paid-prioritization footnote is pushing its way back into the main text. It’s doing so in a particularly arcane way, but one that may have serious implications for higher education. Understanding this requires some definitions. After addressing those (as Steve Worona points out, an excellent Wired article has even more on how the Internet, peering, and content delivery networks work), I’ll turn to current issues and higher education’s interests.

What Is Network Neutrality?

To be “neutral”, in the FCC’s earlier formulation, a network must transmit public Internet traffic equivalently without regard for its source, its destination, or its content. Public Internet traffic means traffic that involves externally accessible IP addresses. A network can discriminate on the basis of type–for example, treat streaming video differently from email. But a neutral network cannot discriminate on source, destination, or content within a given type of traffic. A network can treat special traffic such as cable TV programming or cable-based telephony–“managed services”, in the jargon–differently than regular public Internet traffic, although this is controversial since the border is murky. More controversial still, given current trends, is the exclusion of cellular wireless Internet traffic (but not WiFi) from neutrality requirements.

Pipes

The word “transmit” is important, because it’s different from “send” and “receive”. Users connect computers, servers, phones, television sets, and other devices to networks. They choose and pay for the capacity of their connection (the “pipe”, in the usual but imperfect plumbing analogy) to send and receive network traffic. Not all pipes are the same, and it’s perfectly acceptable for a network to provide lower-quality pipes–slower, for example–to end users who pay less, and to charge customers differently depending on where they are located. But a neutral network must provide the same quality of service to those who pay for the same size, quality, and location of “pipe”.

A user who is mostly going to send and receive small amounts of text (such as email) can get by with very modest and inexpensive capacity. One who is going to view video needs more capacity, one who is going to use two-way videoconferencing needs even more, and a commercial entity that is going to transmit multiple video streams to many customers needs lots. Sometimes the capacity of connections is fixed–one pays for a given capacity regardless of whether one uses it all–and sometimes their capacity and cost adjust dynamically with use. But in all cases one is merely paying for a connection to the network, not for how quickly traffic will get to or arrive from elsewhere. That last depends on how much someone is paying at the other end, and on how well the intervening networks interconnect. Whether one can pay for service quality other than the quality of one’s own connection is central to the current debate.

Users

It’s also important to consider two different (although sometimes overlapping) kinds of users: “end users” and “providers”. In general, providers deliver services to end users, sometimes content (for example, Netflix, the New York Times, or Google Search), sometimes storage (OneDrive, Dropbox), sometimes communications (Gmail, Xfinity Connect), and sometimes combinations of these and other functionality (Office Online, Google Apps).

The key distinctions between providers and end users are scale and revenue flow. The typical provider serves thousands if not millions of end users; the typical end user uses more than a few but rarely more than a few hundred providers. End users provide revenue to providers, either directly or by being counted; providers receive revenue (or sometimes other value such as fame) from end users or advertisers, and use it to fund the services they provide.

Roles

Networks (and therefore network operators) can play different roles in transmission: “first mile”, “last mile”, “backbone”, and “peering”. Providers connect to first-mile networks. End users do the same to last-mile networks. (First-mile and last-mile networks are mirror images of each other, of course, and can swap roles, but there’s always one of each for any traffic.) Sometimes first-mile networks connect directly to last-mile networks, and sometimes they interconnect indirectly using backbones, which in turn can interconnect with other backbones. Peering is how first-mile, last-mile, and backbone networks interconnect.

To use another imperfect analogy, first-mile networks are on-ramps to backbone freeways, last-mile networks are off-ramps, and peering is where freeways interconnect. But here’s why the analogy is imperfect: sometimes providers connect directly to backbones, and sometimes first-mile and last-mile networks have their own direct peering interconnections, bypassing backbones. Sometimes, as the Wired article points out, providers pay last-mile networks to host their servers, and sometimes special content-distribution systems such as Akamai do roughly the same. Those imperfections account for much of the current controversy.

Consider how I connect the Mac on my desk in Comcast‘s downtown office (where a few of us from NBCUniversal also work) to hostmonster.com, where this blog lives. I connect to the office wireless, which gives me a private (10.x.x.x) IP address. That goes to an internal (also private) router in Philadelphia, which then connects to Comcast’s public network. Comcast, as the company’s first-mile network, takes the traffic to Pennsylvania, then to Illinois, then back east to Virginia. There Comcast has a peering connection to Cogent, which is Hostmonster’s first-mile network provider. Cogent carries my traffic from Virginia to Illinois, Missouri, Colorado, and Utah, where Hostmonster is located and connects to Cogent.
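
The private-versus-public distinction in that path is mechanical, and easy to check. Here’s a minimal Python sketch (the 10.x.x.x and 192.168.x.x addresses follow the private-range pattern mentioned above; 8.8.8.8 is just a well-known public address used for contrast):

```python
import ipaddress

# Sketch: RFC 1918 addresses like the office's 10.x.x.x are not routable
# on the public Internet; a router must translate them to a public address
# before traffic can leave the first-mile network.
for addr in ["10.1.2.3", "192.168.0.7", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```

The first two addresses print as private, the last as public, which is why my desktop’s traffic must pass through Comcast’s routers before it can reach Hostmonster at all.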

If Comcast and Cogent did not have a direct connection, then my traffic would flow through a backbone such as Level3. If Hostmonster placed its servers in Comcast data centers, my traffic would be all-Comcast. As I’ll note repeatedly, this issue–how first-mile, last-mile, and backbones peer, and how content providers deal with this–is driving much of today’s network-neutrality debate. So is the increasing consolidation of the last-mile network business.

Public/Private

“Public” networks are treated differently than “private” ones. Generally speaking, if a network is open to the general public, and charges them fees to use it, then it’s a public network. If access is mostly restricted to a defined, closed community and does not charge use fees, then it’s a private network. The distinction between public and private networks comes mostly from the Communications Assistance to Law Enforcement Act (CALEA), which took effect in 1995. CALEA required “telecommunications carriers” to assist police and other law enforcement, notably by enabling court-approved wiretaps.

Even for traditional telephones, it was not entirely clear which “telecommunications carriers” were covered–for example, what about campus-run internal telephone exchanges?–and as CALEA extended to the Internet the distinction became murkier. Eventually “open to the general public, and charges them fees” provided a practical distinction, useful beyond CALEA.

Most campus networks are private by this definition. So are my home network, the network here in the DC Comcast office, and the one in my local Starbucks. To take the roadway analogy a step further, home driveways, the extensive network of roads within gated residential communities (even a large one such as Kiawah Island), and roadways within large industrial facilities (such as US Steel’s Gary plant) are private. City streets, state highways, and Interstates are public. (Note that the meaning of “public network” in Windows, MacOS, or other security settings is different.)

Neutrality

In practice, and in most of the public debate until recently, the term “network neutrality” has meant this: except in certain narrow cases (such as illegal uses), a neutral-network operator does not prioritize traffic over the last mile to or from an end user according to the source of the traffic, who the end user is, or the content of the traffic. Note the important qualification: “over the last mile”.

An end user with a smaller, cheaper connection will receive traffic more slowly than one who pays for a faster connection, and the same is true for providers sending traffic. The difference may be more pronounced for some types of traffic (such as video) than for others (email). Other than this, however, a neutral network treats all traffic the same. In particular, the network operator does not manipulate the traffic for its own purposes (such as degrading a competitor’s service), and does not treat end users or providers differently except to the extent they pay for the speed or other qualities of their own network connections.

“Public” networks often claim to be neutral, at least to some degree; “private” ones rarely do. Most legislative and regulatory efforts to promote network neutrality focus on public networks.

Enough definition. What does this all mean for higher education, and in particular how is that meaning different from what I wrote about back in 2011?

The Rebirth of Paid Prioritization

Where once the debate centered on last-mile neutrality for Internet traffic to and from end users, which is relatively straightforward and largely accepted, it has now expanded to include both Internet and “managed services” over the full path from provider to end user, which is much more complicated and ambiguous.

An early indicator was AT&T’s proposal to let providers subsidize the delivery of their traffic to AT&T cellular-network end users, specifically by allowing providers to pay the data costs associated with their services to end users. That is, providers would pay for how traffic was delivered and charged to end users. This differs fundamentally from the principle that the service end users receive depends only on what end users themselves pay for. Since cellular networks are not required to be neutral, AT&T’s proposal violated no law or regulation, but it nevertheless triggered opposition: It implied that AT&T’s customers would receive traffic (ads, downloads, or whatever) from some providers more advantageously–that is, more cheaply–than equivalent traffic from other providers. End user would have no say in this, other than to change carriers. Thus far AT&T’s proposal has attracted few providers, but this may be changing.

Then came the running battles between Netflix, a major provider, and last-mile providers such as Comcast and Verizon. Netfllix argued that end users were receiving its traffic less expeditiously than other providers’ traffic, that this violated neutrality principles, and that last-mile providers were responsible for remedying this. The last-mile providers rejected this argument: in their view the problem arose because Netfllix’s first-mile network (as it happens, Cogent, the same one Hostmonster uses) was unwilling to pay for peering connections capable of handling Netflix’s traffic (which can amount to more than a quarter of all Internet traffic some evenings). In the last-mile networks’ view, Netflix’s first-mile provider was responsible for fixing the problem at its (and therefore presumably Netflix’s) expense. The issue is, who pays to ensure sufficient peering capacity? Returning to the highway metaphor, who pays for sufficient interchange ramps between toll roads, especially when most truck traffic is in one direction?

In the event Netflix gave in, and arranged (and paid for) direct first-mile connections to Comcast, Verizon, and other last-mile providers. But Netflix continues to press its case, and its position has relevance for higher education.

Colleges and Universities

Colleges and universities have traditionally taken two positions on network neutrality. Representing end users, including their campus community and distant students served over the Internet, higher education has taken a strong position in support of the FCC’s network-neutrality proposals, and even urged that they be extended to cover cellular networks. As operators of networks funded and designed to support campuses’ instructional, research, and administrative functions, however, higher education also has taken the position that campus networks, like home, company, and other private networks, should continue to be exempted from network-neutrality provisions.

These remain valid positions for higher education to take in the current debate, and indeed the principles recently posted by EDUCAUSE and various other organizations do precisely that. But the emergence of concrete paid-prioritization services may require more nuanced positions and advocacy.  This is partly because the FCC’s positions have shifted, and partly because the technology and the debate have evolved.

Why should colleges and universities care about this new network-neutrality battleground? Because in addition to representing end users and operating private networks, campuses are increasingly providing instruction to distant students over the Internet. Massively open online courses (MOOCs) and other distance-education services often involve streamed or two-way video. They therefore require high-quality end-to-end network connections.

In most cases, campus network traffic to distant student flows over the commercial Internet, rather than over Internet2 or regional research and education (R&E) networks. Whether it reaches students expeditiously depends not only on the campus’s first-mile connection (“first mile” rather than “last mile” because the campus is now a provider rather than simply representing end users), but also on how the campus’s Internet service provider connects to backbones and/or to students’ last-mile networks–and of course on whether distant students have paid for good enough connections. This is similar to Netflix’s situation.

Unlike Netflix, however, individual campuses probably cannot afford to pay for direct connections to all of their students’ last-mile networks, or to place servers in distant data centers. They thus depend on their first-mile networks’ willingness to peer effectively with backbone and last-mile networks. Yet campuses are rarely major customers of their ISPs, and therefore have little leverage to influence ISPs’ backbone and peering choices. Alternatively, campuses can in theory use their existing connections to R&E networks to deliver instruction. But this is only possible if those R&E networks peer directly and capably with key backbone and last-mile providers. R&E networks generally have not done this.

Here’s what this all means: Higher education needs to continue supporting its historical positions promoting last-mile neutrality and seeking private-network exemptions for campus networks. But colleges and universities also need to work together to make sure their instructional traffic will continue to reach distant students. One way to achieve this is by opposing paid prioritization, of course. But FCC and other regulations may permit limited paid prioritization, or technology may as usual stay one step ahead of regulation. Higher education must figure out the best ways to deal with that, and collaborate to make them so.

 

 

 

 

Notes From (or is it To?) the Dark Side

“Why are you at NBC?,” people ask. “What are you doing over there?,” too, and “Is it different on the dark side?” A year into the gig seems a good time to think about those. Especially that “dark side” metaphor.  For example, which side is “dark”?

This is a longer-than-usual post. I’ll take up the questions in order: first Why, then What, then Different; use the links to skip ahead if you prefer.

Why are you at NBC?

5675955This is the first time I’ve worked at a for-profit company since, let’s see, the summer of 1967: an MIT alumnus arranged an undergraduate summer job at Honeywell‘s Mexico City facility. Part of that summer I learned a great deal about the configuration and construction of custom control panels, especially for big production lines. I think of this every time I see photos of big control panels, such as those at older nuclear plants—I recognize the switch types, those square toggle buttons that light up. (Another part of the summer, after the guy who hired me left and no one could figure out what I should do, I made a 43½-foot paper-clip chain.)

One nice Honeywell perk was an employee discount on a Pentax 35mm SLR with a 40mm and 135mm lenses, which I still have in a box somewhere, and which still works when I replace the camera’s light-meter battery. (The Pentax brand belonged to Honeywell back then, not Ricoh.) Excellent camera, served me well for years, through two darkrooms and a lot of Tri-X film. I haven’t used it since I began taking digital photos, though.

5499942818_d3d9e9929b_nI digress. Except, it strikes me, not really. One interesting thing about digital photos, especially if you store them online and make most of them publicly visible (like this one, taken on the rim of spectacular Bryce Canyon, from my Backdrops collection), is that sometimes the people who find your pictures download them and use them for their own purposes. My photos carry a Creative Commons license specifying that although they are my intellectual property, they can be used for nonprofit purposes so long as they are attributed to me (an option not available, apparently, if I post them on Facebook instead).

So long as those who use my photos comply with the CC license requirement, I don’t require that they tell me, although now and then they do. But if people want to use one of my photos commercially, they’re supposed to ask my permission, and I can ask for a use fee. No one has done that for me—I’m keeping the day job—but it’s happened for our son.

dmcaI hadn’t thought much about copyright, permissions, and licensing for personal photos (as opposed to archival, commercial, or institutional ones) back when I first began dealing with “takedown notices” sent to the University of Chicago under the Digital Millennium Copyright Act (DMCA). There didn’t seem to be much of a parallel between commercialized intellectual property, like the music tracks that accounted for most early DMCA notices, and my photos, which I was putting online mostly because it was fun to share them.

Neither did I think about either photos or music while serving on a faculty committee rewriting the University’s Statute 18, the provision governing patents in the University’s founding documents.

sealThe issues for the committee were fundamentally two, both driven somewhat by the evolution of “textbooks”.

First, where is the line between faculty inventions, which belong to the University (or did at the time), and creations, which belong to creators—between patentable inventions and copyrightable creations, in other words? This was an issue because textbooks had always been treated as creations, but many textbooks had come to include software (back then, CDs tucked into the back cover), and software had always been treated as an invention.

Second, who owns intellectual property that grows out of the instructional process? Traditionally, the rights and revenues associated with textbooks, even textbooks based on University classes, belonged entirely to faculty members. But some faculty members were extrapolating this tradition to cover other class-based material, such as videos of lectures. They were personally selling those materials and the associated rights to outside entities, some of which were in effect competitors (in some cases, they were other universities!).

fathomAs you can see by reading the current Statute 18, the faculty committee really didn’t resolve any of this. Gradually, though, it came to be understood  that textbooks, even textbooks including software, were still faculty intellectual property, whereas instructional material other than that explicitly included in traditional textbooks was the University’s to exploit, sell, or license.

With the latter well established, the University joined Fathom, one of the early efforts to commercialize online instructional material, and put together some excellent online materials. Unfortunately, Fathom, like its first-generation peers, failed to generate revenues exceeding its costs. Once it blew through its venture capital, which had mostly come from Columbia University, Fathom folded. (Poetic justice: so did one of the profit-making institutions whose use of University teaching materials prompted the Statute 18 review.)

Gradually this all got me interested in the thicket of issues surrounding campus online distribution and use of copyrighted materials and other intellectual property, and especially the messy question how campuses should think about copyright infringement occurring within and distributed from their networks. The DMCA had established the dual principles that (a) network operators, including campuses, could be held liable for infringement by their network users, but (b) they could escape this liability (find “safe harbor”) by responding appropriately to complaints from copyright holders. Several of us research-university CIOs worked together to develop efficient mechanisms for handling and responding to DMCA notices, and to help the industry understand those and the limits on what they might expect campuses to do.

heoaAs one byproduct of that, I found myself testifying before a Congressional committee. As another, I found myself negotiating with the entertainment industry, under US Education Department auspices, to develop regulations implementing the so-called “peer to peer” provisions of the Higher Education Opportunity Act of 2008.

That was one of several threads that led to my joining EDUCAUSE in 2009. One of several initiatives in the Policy group was to build better, more open communications between higher education and the entertainment industry with regard to copyright infringement, DMCA, and the HEOA requirements.

hero-logo-edxI didn’t think at the time about how this might interact with EDUCAUSE’s then-parallel efforts to illuminate policy issues around online and nontraditional education, but there are important relevancies. Through massively open online courses (MOOCs) and other mechanisms, colleges and universities are using the Internet to reach distant students, first to build awareness (in which case it’s okay for what they provide to be freely available) but eventually to find new revenues, that is, to monetize their intellectual property (in which case it isn’t).

music-industryIf online campus content is to be sold rather than given away, then campuses face the same issues as the entertainment industry: They must protect their content from those who would use it without permission, and take appropriate action to deter or address infringement.

Campuses are generally happy to make their research freely available (except perhaps for inventions), as UChicago’s Statute 18 makes clear, provided that researchers are properly credited. (I also served on UChicago’s faculty Intellectual Property Committee, which among other things adjudicated who-gets-credit conflicts among faculty and other researchers.) But instruction is another matter altogether. If campuses don’t take this seriously, I’m afraid, then as goes music, so goes online higher education.

Much as campus tumult and changes in the late Sixties led me to abandon engineering for policy analysis, and quantitative policy analysis led me into large-scale data analysis, and large-scale data analysis led me into IT, and IT led me back into policy analysis, intellectual-property issues led me to NBCUniversal.

Peacock_CleanupI’d liked the people I met during the HEOA negotiations, and the company seemed seriously committed to rethinking its relationships with higher education. I thought it would be interesting, at this stage in my career, to do something very different in a different kind of place. Plus, less travel (see screwup #3 in my 2007 EDUCAUSE award address).

So here I am, with an office amidst lobbyists and others who focus on legislation and regulation, with a Peacock ID card that gets me into the Universal lot, WRC-TV, and 30 Rock (but not SNL), and with a 401k instead of a 403b.

What are you doing over there?

NBCUniversal’s goals for higher education are relatively simple. First, it would like students to use legitimate sources to get online content more, and illegitimate “pirate” sources less. Second, it would like campuses to reduce the volume of infringing material made available from their networks to illegal downloaders worldwide.

477px-CopyrightpiratesMy roles are also two. First, there’s eagerness among my colleagues (and their counterparts in other studios) to better understand higher education, and how campuses might think about issues and initiatives. Second, the company clearly wants to change its approach to higher education, but doesn’t know what approaches might make sense. Apparently I can help with both.

To lay foundation for specific projects—five so far, which I’ll describe briefly below—I looked at data from DMCA takedown notices.

Curiously, it turned out, no one had done much to analyze detected infringement from campus networks (as measured by DMCA notices sent to them), or to delve into the ethical puzzle: Why do students behave one way with regard to misappropriating music, movies, and TV shows, and very different ways with regard to arguably similar options such as shoplifting or plagiarism? I’ve written about some of the underlying policy issues in Story of S, but here I decided to focus first on detected infringement.

riaa-logoIt turns out that virtually all takedown notices for music are sent by the Recording Industry Association of America, RIAA (the Zappa Trust and various other entities send some, but they’re a drop in the bucket).

MPAAMost takedown notices for movies and some for TV are sent by the Motion Picture Association of America, MPAA, on behalf of major studios (again, with some smaller entities such as Lucasfilm wading in separately). NBCUniversal and Fox send out notices involving their movies and TV shows.

sources chartI’ve now analyzed data from the major senders for both a twelve-month period (Nov 2011-Oct 2012) and a more recent two-month period (Feb-Mar 2013). For the more recent period, I obtained very detailed data on each of 40,000 or so notices sent to campuses. Here are some observations from the data:

  • Almost all the notices went to 4-year campuses that have at least 100 dormitory beds (according to IPEDS). To a modest extent, the bigger the campus the more notices, but the correlation isn’t especially large.
  • Over half of all campuses—even of campuses with dorms—didn’t get any notices. To some extent this is because there are lots and lots of very small campuses, and they fly under the infringement-detection radar. But I’ve learned from talking to a fair number of campuses that, much to my surprise, many heavily filter or even block peer-to-peer traffic at their commodity Internet border firewall—usually because the commodity bandwidth p2p uses is expensive, especially for movies, rather than to deal with infringement per se. Outsourced dorm networks also have an effect, but I don’t think they’re sufficiently widespread yet to explain the data.
  • Several campuses have out-of-date or incorrect “DMCA agent” addresses registered at the Library of Congress. Compounding that, it turns out some notice senders use “abuse” or other standard DNS addresses rather than the registered agent addresses.
  • Among campuses that received notices, a few campuses stand out for receiving the lion’s share, even adjusting for their enrollment. For example, the top 100 or so recipient campuses got about three quarters of the total, and a handful of campuses stand out sharply even within that group: the top three campuses (the leftmost blue bars in the graph below) accounted for well over 10% of the notices. (I found the same skewness in the 2012 study.) With a few interesting exceptions (interesting because I know or suspect what changed), the high-notice groups have been the same for the two periods.

utorrent-facebook-mark-850-transparentThe detection process, in general, is that copyright holders choose a list of music, movie, or TV titles they believe likely to be infringed. Their contractors then use BitTorrent tracker sites and other user tools to find illicit sources for those titles.

For the most part the studios and associations simply look for titles that are currently popular in theaters or from legitimate sources. It’s hard to see that process introducing a bias that would affect some campuses so much differently than others. I’ve also spent considerable time looking at how a couple of contractors verify that titles being offered illicitly (that is, listed for download on a BitTorrent tracker site such as The Pirate Bay) are actually the titles being supplied (rather than, say, malware, advertising, or porn), and at how they figure out where to send the resulting takedown notices. That process too seems pretty straightforward and unbiased.

argo-15355-1920x1200Sender choices clearly can influence how notice counts vary from time to time: for example, adding a newly popular title to the search list can lead to a jump in detections and hence notices. But it’s hard to see how the choice of titles would influence how notice counts vary from institution to institution.

This all leads me to believe that takedown notices tell us something incomplete but useful about campus policies and practices, especially at the extremes. The analysis led directly to two projects focused on specific groups of campuses, and indirectly to three others.

Role Model Campuses

Based on the results of the data analysis, I communicated individually with CIOs at 22 campuses that received some but relatively few notices: specifically, campuses that (a) received at least one notice (and so are on the radar) but (b) received fewer than 300 notices and fewer than 20 per thousand headcount students, (c) have at least 7,500 headcount students, and (d) have at least 10,000 dorm beds (per IPEDS) or enough dorm beds to house half their headcount. (These are Group 4, the purple bars in the graph below. The solid bars represent total notices sent, and the hollow bars represent incidence, or notices per thousand headcount students. Click on the graph to see it larger.)
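Criteria like these translate directly into a filter. Here is a sketch using hypothetical campus records; the field names and numbers are mine for illustration, not IPEDS’s:

```python
# Sketch of the Group 4 ("role model") filter described above.
# Records and field names are hypothetical illustrations.
campuses = [
    {"name": "A", "notices": 150, "headcount": 12000, "dorm_beds": 10500},
    {"name": "B", "notices": 0,   "headcount": 20000, "dorm_beds": 12000},
    {"name": "C", "notices": 900, "headcount": 9000,  "dorm_beds": 11000},
    {"name": "D", "notices": 40,  "headcount": 8000,  "dorm_beds": 4200},
]

def is_role_model(c):
    per_thousand = c["notices"] / (c["headcount"] / 1000)
    return (
        c["notices"] >= 1                      # (a) on the radar
        and c["notices"] < 300                 # (b) relatively few notices,
        and per_thousand < 20                  #     absolutely and relatively
        and c["headcount"] >= 7500             # (c) reasonably large
        and (c["dorm_beds"] >= 10000           # (d) largely residential
             or c["dorm_beds"] >= c["headcount"] / 2)
    )

print([c["name"] for c in campuses if is_role_model(c)])
```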

I’ve asked each of those campuses whether they’d be willing to document their practices in an open “role models” database developed jointly by the campuses and hosted by a third party such as a higher-education association (as EDUCAUSE did after the HEOA regulations took effect). The idea is to make a collection of diverse effective practices available to other campuses that might want to enhance their practices.

High Volume Campuses

Separately, I communicated privately with CIOs at 13 campuses that received exceptionally many notices, even adjusting for their enrollment (Group 1, the blue bars in the graph). I’ve looked in some detail at the data for those campuses, some large and some small, and in some cases that’s led to suggestions.

For example, in a few cases I discovered that virtually all of a high-volume campus’s notices were split evenly among a small number of consecutive IP addresses. In those cases, I’ve suggested that those IP addresses might be the front-end to something like a campus wireless network. Filtering or blocking p2p (or just BitTorrent) traffic on those few IP addresses (or the associated network devices) might well shrink the campus’s role as a distributor without affecting legitimate p2p or BitTorrent users (who tend to be managing servers with static addresses).
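That pattern is straightforward to spot in notice logs. Here is a sketch with fabricated data, assuming each notice records a target IP address; the three-address window and 90% threshold are illustrative choices of mine, not values from the study:

```python
# Sketch: flag a campus whose notices concentrate on a few consecutive
# IP addresses -- the pattern suggesting a shared NAT/wireless front-end.
# Addresses, counts, and thresholds are fabricated illustrations.
from collections import Counter
from ipaddress import ip_address

notices = (
    ["192.0.2.10"] * 480 + ["192.0.2.11"] * 510 + ["192.0.2.12"] * 495
    + ["192.0.2.200"] * 5 + ["198.51.100.7"] * 10
)

counts = Counter(notices)
top = counts.most_common(3)                      # the few busiest addresses
top_ips = sorted(ip_address(ip) for ip, _ in top)
top_total = sum(n for _, n in top)

# Consecutive if each address is exactly one greater than the previous.
consecutive = all(int(b) - int(a) == 1 for a, b in zip(top_ips, top_ips[1:]))
share = top_total / len(notices)

if consecutive and share > 0.9:
    print(f"likely shared front-end: {[str(ip) for ip in top_ips]} ({share:.0%})")
```

When nearly all traffic funnels through a handful of adjacent addresses like this, filtering on just those addresses can target the shared front end without touching static-address hosts elsewhere on campus.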

Symposia

Back when I was at EDUCAUSE, we worked with NBCUniversal to host a DC meeting between senior campus staff from a score of campuses nationwide and some industry staff closely involved in detecting online infringement and sending notices. The meeting was energetic and frank, and participants from both sides went away with a better sense of the other’s bona fides and seriousness. This was the first time campus staff had gotten a close look at the takedown-notice process since a Common Solutions Group meeting in Ann Arbor some years earlier; back then the industry’s practices were much less refined.

Based on the NBCUniversal/EDUCAUSE experience, we’re organizing a series of regional “Symposia” along these lines on campuses in various cities across the US. The objectives are to open new lines of communication and to build trust. The invitees are IT and student-affairs staff from local campuses, plus several representatives from industry, especially the groups that actually search for infringement on the Internet. The first was in New York, the second in Minneapolis, the third will be in Philadelphia, and others will follow in the West, the South, and elsewhere in the Midwest.

Research

We’re funding a study within a major state university system to gather two kinds of data. Initially the researchers are asking each campus to describe the measures it takes to “effectively combat” copyright infringement: its communications with students, its policies for dealing with violations, and the technologies it uses. The data from the first phase will help enhance a matrix we’ve drafted outlining the different approaches taken by different campuses, complementing what will emerge from the “role models” project.

Based on the initial data, the researchers and NBCUniversal will choose two campuses to participate in the pilot phase of the Campus Online Entertainment Initiative (which I’ll describe next). In advance of that pilot, the researchers will gather data from a sample of students on each campus, asking about their attitudes toward and use of illicit and legitimate online sources for music, movies, and video. They’ll then repeat that data collection after the pilot term.

Campus Online Entertainment Initiative

Last but least in neither ambition nor complexity, we’re crafting a program that will attempt to address both goals I listed earlier: encouraging campuses to take effective steps to reduce distribution of infringing material from their networks, and helping students to appreciate (and eventually prefer) legitimate sources for online entertainment.

Working with Universal Studios and some of its peers, we’ll encourage students on participating campuses to use legitimate sources by making a wealth of material available coherently and attractively—through a single source that works across diverse devices, and at a substantial discount or with similar incentives.

Participating campuses, in turn, will maintain or implement policies and practices likely to shrink the volume of infringing material available from their networks. In some cases the participating campuses will already be like those in the “role models” group; in others they’ll be “high volume” or other campuses willing to adopt more effective practices.

I’m managing these projects from NBCUniversal’s Washington offices, but with substantial collaboration from company colleagues here, in Los Angeles, and in New York; from Comcast colleagues in Philadelphia; and from people in other companies. Interestingly, and to my surprise, pulling this all together has been much like managing projects at a research university. That’s a good segue to the next question.

Is it different on the dark side?

Newly hired, I go out to WRC, the local NBC affiliate in Washington, to get my NBCUniversal ID and to go through HR orientation. Initially it’s all familiar: the same ID photo technology, the same RFID keycard, the same ugly tile and paint on the hallways, the same tax forms to be completed by hand.

But wait: Employee Relations is next door to the (now defunct) Chris Matthews Show. And the benefits part of orientation is a video hosted by Jimmy Fallon and Brian Williams. And there’s the possibility of something called a “bonus”, whatever that is.

Around my new office, in a spiffy modern building at 300 New Jersey Avenue, everyone seems to have two screens. That’s just as it was in higher-education IT. But wait: here one of them is a TV. People watch TV all day as they work.

Toto, we’re not in higher education any more.

It’s different over here, and not just because there’s a beautiful view of the Capitol from our conference rooms. Certain organizational functions seem to work better, perhaps because they should and in the corporate environment can be implemented by decree: HR processes, a good unified travel arrangement and expense system, catering, office management. Others don’t: there’s something slightly out of date about the office IT, especially the central/individual balance and security, and there’s an awful lot of paper.

Some things are just different, rather than better or not: the culture is heavily oriented to face-to-face and telephone interaction, even though it’s a widely distributed organization where most people are at their desks most of the time. There’s remarkably little email, and surprisingly little use of workstation-based videoconferencing. People dress a bit differently (a maître d’ told me, “that’s not a Washington tie”).

But differences notwithstanding, mostly things feel much the same as they did at EDUCAUSE, UChicago, and MIT.

Where I work, people are generally happy: they talk to one another, gossip a bit, have pizza on Thursdays, complain about the quality of coffee, and are in and out a lot. It’s not an operational group, and so there’s not the bustle that comes with that, but it’s definitely busy (especially with everyone around me working on the Comcast/Time Warner merger). The place is teamly, in that people work with one another based on what’s right substantively, and rarely appeal to authority to reach decisions. Who trusts whom seems at least as important as who outranks whom, or whose boss is more powerful. Conversely, it’s often hard to figure out exactly how to get something done, and lots of effort goes into following interpersonal networks. That’s all very familiar.

I’d never realized how much like a research university a modern corporation can be. Where I work is NBCUniversal, which is the overarching corporate umbrella (“Old Main”, “Mass Hall”, “Building 10”, “California Hall”, “Boulder”) for 18 other companies including news, entertainment, Universal Studios, theme parks, the Golf Channel, and Telemundo (which are remarkably like schools and departments in their varied autonomy).

Meanwhile NBCUniversal is owned by Comcast—think “System Central Office”. Sure, these are all corporate entities, and they have concrete metrics by which to measure success: revenue, profit, subscribers, viewership, market share. But the relationships among organizations, activities, and outcomes aren’t as coherent and unitary as I’d expected.

Dark or Green?

So, am I on the dark side, or have I left it behind for greener pastures? Curiously, I hear both from my friends and colleagues in higher education: Some of them think my move is interesting and logical, some think it odd and disappointing. Curiouser still, I hear both from my new colleagues in the industry: Some think I was lucky to have worked all those decades in higher education, while others think I’m lucky to have escaped. None of those views seems quite right, and none seems quite wrong.

The point, I suppose, is that simple judgments like “dark” and “greener” underrepresent the complexity of organizational and individual value, effectiveness, and life. Broad-brush characterizations, especially characterizations embodying the ecological fallacy, “…the impulse to apply group or societal level characteristics onto individuals within that group,” do none of us any good.

It’s so easy to fall into the ecological-fallacy trap; so important, if we’re to make collective progress, not to.

Comments or questions? Write me: greg@gjackson.us

(The quote is from Charles Ess & Fay Sudweeks, Culture, technology, communication: towards an intercultural global village, SUNY Press 2001, p 90. Everything in this post, and for that matter all my posts, represents my own views, not those of my current or past employers, or of anyone else.)

3|5|2014 11:44a est

Streaming TV: New Tricks and Old Problems

I like to read mysteries. No surprise, I also watch lots of TV cop shows and mysteries.

Some good reads turn out to be not-so-good TV, and vice versa. Ian Rankin‘s Rebus mysteries and various of Peter Lovesey‘s are examples of the former, and, in my view at least, David Suchet’s Poirot is a lot more interesting and entertaining than Agatha Christie‘s (for that matter, so is Albert Finney‘s). The same is generally true of Leo McKern’s Rumpole of the Bailey compared to John Mortimer‘s.

Of course there are lots of good-good examples (the adaptations of P.D. James and Dorothy Sayers, which both read and play well, in part because the adaptations are just that, rather than renditions), and plenty of not-not (for example, again in my view, the MidSomer Murders series based on Caroline Graham‘s books—not, mind, that this stopped me from watching all 70+ episodes of the British series—and the VI Warshawski series based on Sara Paretsky‘s work, which somehow never drew me in despite the Chicago location).

Then there are TV shows that don’t have book counterparts, and for that matter aren’t exactly cop shows or mysteries. Barney Miller comes to mind, as does Hill Street Blues.

The trigger for today’s rumination, New Tricks, is one of those not-exactlys. The BBC describes New Tricks as a “drama series about an eccentric group of veteran police detectives reopening cold cases.” Which it is, but as is the case with much detective fiction, the plot is simply a MacGuffin to draw us into the characters and their relationships.

The backstory is this: a disgraced Detective Superintendent has shot a dog, and is working her way out of the doghouse (sorry, couldn’t resist that) by leading a squad comprising three retired detectives—one who converses regularly with his dead wife, another a womanizing rule-bender who maintains cordial relationships with several ex-wives and their daughters, and an ex-alcoholic third who lives on the medication-honed mental edge between paranoid delusion and photographic memory.

It’s interesting how little TV watching these days is based on broadcast or cable schedules. We routinely time-shift using the Xfinity (né Comcast) On Demand services, and we also do considerable binge watching. The first time we binged was back in the red-envelope days, the entirety of Jewel in the Crown over four intense evenings. The most recent efforts were re-watching the British House of Cards in preparation for the Netflix House of Cards. (Both series are superb.)

Usually, though, we don’t exactly binge; rather, we get into a particular show, and then watch one or two episodes per evening until we’ve used it up. That’s how we did MidSomer Murders a while back (despite its endless village fetes), then the excellent George Gently and Vera, and more recently the outstanding and very French Spiral (L’Engrenage).

Some shows we delve into for a while, and then put on hiatus when they become overwhelming or repetitive—the Danish series The Eagle and the inimitable Larry Sanders Show are in that category. That’s probably what would happen with The Phil Silvers Show, which my father loved, and recordings of which supposedly were destroyed in a fire. Your Show of Shows. Rowan and Martin’s Laugh-In.

But I digress. Friends told us about New Tricks, and so we went looking for it. A local PBS station supposedly is planning to air some of it, but we figured this was a show we should watch from the beginning, and so we wanted the earlier episodes.

Which finally brings me to the actual topic for today: What it took to find and watch New Tricks says a great deal about what needs to improve if the online-viewing marketplace is to succeed. Here’s how the quest went:

  1. Xfinity On Demand. Searching isn’t easy through the TV: you need to move a cursor around letter by letter using the remote. Using the Xfinity app on a phone or tablet is much easier, since one can just type in a search term to find a show’s schedule. (The app can even tell the cable box to change channels!) Anyway, no luck—Comcast doesn’t have New Tricks.
  2. Netflix. Of course the Comcast cable box doesn’t do Netflix (the fact that I said “of course” is telling—competition most definitely trumps consolidated customer convenience), so I had to switch to one of our two Netflix-enabled devices (the Sony BluRay player or the AppleTV, both connected to our home network), after first telling the TV to use the appropriate HDMI input. Search using the BluRay or AppleTV remote isn’t any easier than with the Comcast box, but at least the Netflix app has good search tools. However, it tells me that “New Tricks is unavailable to stream” (and then suggests George Gently or MidSomer Murders). So, no luck again.
  3. Amazon Instant Video. That only works on the BluRay player, not the AppleTV, and its app is a bit awkward, so, tired of moving the cursor around, I go right to the Amazon website on a computer. No luck on Prime Instant Video, the flat-rate subscription service I get by being a Prime customer—but at least Amazon offers an alternative, albeit for streaming purchase (not rental) at $4 per episode or $20 per season. The 8 seasons available for streaming would cost me $160 that way, still cheaper than the $240 I’d pay for DVDs (except there’s a DVD lagniappe: Season 9 is available!).
  4. Hulu (which I don’t subscribe to, so searched only for completeness): No luck.
  5. I’d heard about a new site sponsored by the MPAA, wheretowatch.org, and so figured it was worth a try. Unfortunately, there’s no search tool on that website; instead, it points me to six other search sites. Among those, Flixster and Movies.com return no results. Jinni tells me I can rent the DVD from Netflix. TV Guide tries to take me to a www.tvshowsondvd.com website, presumably so I will buy the DVD, but NBCUniversal’s network malware filter blocks the site with a scary popup message, and being a good network citizen I accept my colleagues’ judgment and don’t bypass the block. Zap2it points me to the $4/episode Amazon offering. Finally, TV.com tells me that I could have watched the show in October 2012 (but only on the BBC, in the UK). None of that is helpful.
  6. Finally I do what I’d usually do first: use Google. The search term “new tricks tv streaming” brings up several links, many of them to copyright-infringing, pirate sites. However, the first link, the legitimate ovguide.com, takes me back to Amazon’s $4 offering. And the second is the mother lode: Idaho Public TV has several seasons of New Tricks available for public, free streaming.

Of course, I want to watch on the TV, not on my iPad. A little more technology solves this problem: I switch our TV’s input to the AppleTV box, start up a New Tricks episode on the iPad, and AirPlay almost automatically redirects the video and sound to our TV. Very cool. (Trying this on an Amazon-hosted show, however, I discover that in some cases, apparently for licensing reasons, AirPlay weirdly plays the audio track on the TV set but the video on the iPad—why, I wonder, do that rather than Just Say No, or play everything on the iPad?)

Two observations.

  • First, it can be really hard and confusing to find video material online. There’s no overall search engine that covers all sources, so far as I can tell—at least, no legitimate overall search engine. (Although Google found what I was looking for, even its results were incomplete, since it didn’t point me to the fee-per-episode Amazon offering, and unfortunately it also suggested several sites offering presumably illegal copies.) Even when one finds material, further searching is sometimes necessary; for example, the excellent cop show Vera has one season available on Netflix, but two on Amazon Prime Instant Video, which you’d never know if you stopped once you found it at Netflix.
  • Second, technology remains as much an obstacle as an enabler. For routine TV viewing at home, we use six devices: Sony and Panasonic TVs, two Comcast/Xfinity Motorola cable boxes, a Sony BluRay player, and an AppleTV. (I suppose our iPads and iPhones should be on this list too. We have two each of those.) Three of the devices are plugged into separate HDMI ports on the TV, and two of them (plus the iOS devices) require connections to our home network, which is connected to Comcast. (As a fringe benefit—I suppose—the landline that’s also provided by Comcast displays caller ID on the TV set, at least when we’re using the cable box; that way we can ignore fundraising calls without looking at our phones. Even better, we get caller ID and voicemail for our landline on a smartphone app. Even when it’s being frustrating, technology can be cool. But I digress again.) Each TV-related device has a remote, with only partially overlapping functionality—for example, the BluRay’s remote can change the TV’s inputs and adjust volume (the Comcast remote can also change the TV’s inputs, albeit awkwardly) but the BluRay remote can’t change cable channels. The AppleTV remote can only control that device, so watching TV through the AppleTV almost always requires using two remotes, one to choose materials and pause the video on the AppleTV, and the other to adjust
    TV volume. When the technology all works, it’s very nice. When it doesn’t, debugging is a nightmare: pressing buttons on different remotes, jiggling cables, checking Internet connections, and so forth. We tried a universal remote for a while, with the same result: great when it worked, nightmare when it didn’t.

Complexity like this is unfortunate, frustrating, and counterproductive, but perhaps, barring change in the economy of entertainment, unavoidable. Sadly, it deters consumption, especially legitimate consumption. The usual ways out of this—common standards, competition on quality and price, such as have returned somewhat to the music world—have so far proven elusive for online TV watching. That’s in part because providers and distributors are quite rationally trying to monetize the material they control, and making it easy for people to find other material, or to change sources, doesn’t achieve that. A true conundrum.

Meanwhile, New Tricks is great fun. We’ve just started Season 4…

The Importance of Being Enterprise

…as Oscar Wilde well might have titled an essay about campus-wide IT, had there been such a thing back then.

Enterprise IT accounts for the lion’s share of campus IT staffing, expenditure, and risk. Yet it receives curiously little attention in national discussion of IT’s strategic higher-education role. Perhaps that should change. Two questions arise:

  • What does “Enterprise” mean within higher-education IT?
  • Why might the importance of Enterprise IT evolve?

What does “Enterprise IT” mean?

Here are some higher-education spending data from the federal Integrated Postsecondary Education Data System (IPEDS), omitting hospitals, auxiliaries, and the like:

Broadly speaking, colleges and universities deploy resources with goals and purposes that relate to their substantive mission or the underlying instrumental infrastructure and administration.

  • Substantive purposes and goals comprise some combination of education, research, and community service. These correspond to the bottom three categories in the IPEDS graph above. Few institutions focus predominantly on research—Rockefeller University, for example. Most research universities pursue all three missions, most community colleges emphasize the first and third, and most liberal-arts colleges focus on the first.
  • Instrumental activities are those that equip, organize, and administer colleges and universities for optimal progress toward their mission—the top two categories in the IPEDS graph. In some cases, core activities advance institutional mission by providing a common infrastructure for mission-oriented work. In other cases, they do it by providing campus-wide or departmental staffing, management, and processes to expedite that work. In still other cases, they do it through collaboration with other institutions or by contracting for outside services.

Education, research, and community service all use IT substantively to some extent. This includes technologies that directly or indirectly serve teaching and learning, technologies that directly enable research, and technologies that provide information and services to outside communities—examples of all three include classroom technologies, learning management systems, technologies tailored to specific research data collection or analysis, research data repositories, library systems, and so forth.

Instrumental functions rely much more heavily on IT. Administrative processes rely increasingly on IT-based automation, standardization, and outsourcing. Mission-oriented IT applications share core infrastructure, services, and support. Core IT includes infrastructure such as networks and data centers, storage and computational clouds, and desktop and mobile devices; administrative systems ranging from financial, HR, student-record, and other back office systems to learning-management and library systems; and communications, messaging, collaboration, and social-media systems.

In a sense, then, there are six technology domains within college and university IT:

  • the three substantive domains (education, research, and community service), and
  • the three instrumental domains (infrastructure, administration, and communications).

Especially in the instrumental domains, “IT” includes not only technology, but also the services, support, and staffing associated with it. Each domain therefore has technology, service, support, and strategic components.

Based on this, here is a working definition: in higher education,

“Enterprise” IT comprises the IT-related infrastructure, applications, services, and staff
whose primary institutional role is instrumental rather than substantive.

Exploring Enterprise IT, framed thus, entails focusing on technology, services, and support as they relate to campus IT infrastructure, administrative systems, and communications mechanisms, plus their strategic, management, and policy contexts.

Why Might the Importance of Enterprise IT Evolve?

Three reasons: magnitude, change, and overlap.

Magnitude

According to data from EDUCAUSE’s Core Data Service (CDS) and the federal Integrated Postsecondary Education Data System (IPEDS), the typical college or university spends just shy of 5% of its operating budget on IT. This varies a bit across institutional types:

We lack good data breaking down IT expenditures further. However, we do have CDS data on how IT staff distribute across different IT functions. Here is a summary graph, combining education and research into “academic” (community service accounts for very little dedicated IT effort):

Thus my assertion above that Enterprise IT accounts for the lion’s share of IT staffing. Even if we omit the “Management” component, Enterprise IT comprises 60-70% of staffing when IT support is included, and almost half when it is not. The distribution is even more skewed for expenditure, since hardware, applications, services, and maintenance are disproportionately greater in Administration and Infrastructure.

Why, given the magnitude of Enterprise relative to other college and university IT, has it not been more prominent in strategic discussion? There are at least two explanations:

  • relatively slow change in Enterprise IT, at least compared to other IT domains (rapidly-changing domains rightly receive more attention than stable ones), and
  • overlap—if not competition—between higher-education and vendor initiatives in the Enterprise space.

Change

Enterprise IT is changing thematically, driven by mobility, cloud, and other fundamental changes in information technology. It also is changing specifically, as concrete challenges arise.

Consider, as one way to approach the former, these five thematic metamorphoses:

  • In systems and applications, maintenance is giving way to renewal. At one time colleges and universities developed their own administrative systems, equipped their own data centers, and deployed their own networks. In-house development has given way to outside products and services installed and managed on campus, and more recently to the same products and services delivered in or from the cloud.
  • In procurement and deployment, direct administration and operations are giving way to negotiation with outside providers and oversight of the resulting services. Whereas once IT staff needed to have intricate knowledge of how systems worked, today that can be less useful than effective negotiation, monitoring, and mediation.
  • In data stewardship and archiving, segregated data and systems are giving way to integrated warehouses and tools. Historical data used to remain within administrative systems. The cost of keeping them “live” became too high, and so they moved to cheaper, less flexible, and even more compartmentalized media. The plunging price of storage and the emergence of sophisticated data warehouses and business-intelligence systems reversed this. Over time, storage-based barriers to data integration have gradually fallen.
  • In management support, unidimensional reporting is giving way to multivariate analytics. Where once summary statistics emerged separately from different business domains, and drawing inferences about their interconnections required administrative experience and intuition, today connections can be made at the record level deep within integrated data warehouses. Speculating about relationships between trends is giving way to exploring the implications of documented correlations.
  • In user support, authority is giving way to persuasion. Where once users had to accept institutional choices if they wanted IT support, today they choose their own devices, expect campus IT organizations to support them, and bypass central systems if support is not forthcoming. To maintain the security and integrity of core systems, IT staff can no longer simply require that users behave appropriately; rather, they must persuade users to do so. This means that IT staff increasingly become advocates rather than controllers. The required skillsets, processes, and administrative structures have been changing accordingly.

Beyond these broad thematic changes, a fourfold confluence is about to accelerate change in Enterprise IT: major systems approaching end-of-life, the growing importance of analytics, extensive mobility supported by third parties, and the availability of affordable, capable cloud-based infrastructure, services, and applications.

Systems Approaching End-of-Life

In the mid-1990s, many colleges and universities invested heavily in administrative-systems suites, often (if inaccurately) called “Enterprise Resource Planning” systems or “ERP.” Here, again drawing on CDS, are implementation data on Student, Finance, and HR/Payroll systems for non-specialized colleges and universities:

The pattern of implementation varies slightly across institution types. Here, for example, are implementation dates for Finance systems across four broad college and university groups:

Although these systems have generally been updated regularly since they were implemented, they are approaching the end of their functional life. That is, although they technically can operate into the future, the functionality of turn-of-the-century administrative systems likely falls short of what institutions currently require. Such functional obsolescence typically happens after about 20 years.

The general point holds across higher education: A great many administrative systems will reach their 20-year anniversaries over the next several years.

Moreover, many commercial administrative-systems providers end support for older products, even if those products have been maintained and updated. This typically happens as new products with different functionality and/or architecture establish themselves in the market.

These two milestones—functional obsolescence and loss of vendor support—mean that many institutions will be considering restructuring or replacement of their core administrative systems over the next few years. This, in turn, means that administrative-systems stability will give way to 1990s-style uncertainty and change.

Growing Importance of Analytics

Partly as a result of mid-1990s systems replacements, institutions have accumulated extensive historical data from their operations. They have complemented and integrated these by implementing flexible data-warehousing and business-intelligence systems.

Over the past decade, the increasing availability of sophisticated data-mining tools has given new purpose to data warehouses and business-intelligence systems that until now have largely provided simple reports. This has laid the foundation for the explosive growth of analytic management approaches (if, for the present, more rhetorical than real) in colleges and universities, and in the state and federal agencies that fund and/or regulate them.

As analytics become prominent in areas ranging from administrative planning to student feedback, administrative systems need to become better integrated across organizational units and data sources. The resulting datasets need to become much more widely accessible while complying with privacy requirements. Neither of these is easy to achieve. Achieving them together is more difficult still.

Mobility Supported by Third Parties

Until about five years ago campus communications—infrastructure and services both—were largely provided and controlled by institutions. This is no longer the case.

Much networking has moved from campus-provided wired and WiFi facilities to cellular and other connectivity provided by third parties, largely because those third parties also provide the mobile end-user devices students, faculty, and staff favor.

Separately, campus-provided email and collaboration systems have given way to “free” third-party email, productivity, and social-media services funded by advertising rather than institutional revenue. That mobile devices and their networking are largely outside campus control is triggering fundamental rethinking of instruction, assessment, identity, access, and security processes. This rethinking, in turn, is triggering re-engineering of core systems.

Affordable, Capable Cloud

Colleges and universities have long owned and managed IT themselves, based on two assumptions: that campus infrastructure needs are so idiosyncratic that they can only be satisfied internally, and that campuses are more sophisticated technologically than other organizations.

Both assumptions held well into the 1990s. That has changed. “Outside” technology has caught up to and surpassed campus technology, and campuses have gradually recognized and begun to avoid the costs of idiosyncrasy.

As a result, outside services ranging from commercially hosted applications to cloud infrastructure are rapidly supplanting campus-hosted services. This has profound implications for IT staffing—both levels and skillsets.

The upshot is that Enterprise, already the largest component of higher-education IT, is entering a period of dramatic change.

Beyond change in IT, the academy itself is evolving dramatically. For example, online enrollment is becoming increasingly common. As the Sloan Foundation reports, the fraction of students taking some or all of their coursework online is increasing steadily.

This has implications not only for pedagogy and learning environments, but also for the infrastructure and applications necessary to serve remote and mobile students.

Changes in the IT and academic enterprises are one reason Enterprise IT needs more attention. A second is the panoply of entities that try to influence Enterprise IT.

Overlap

One might expect colleges and universities to have relatively consistent requirements for administrative systems, and therefore that the market for those would consist largely of a few major widely-used products. The facts are otherwise. Here are data from the recent EDUCAUSE Center for Applied Research (ECAR) research report The 2011 Enterprise Application Market in Higher Education:

The closest we come to a compact market is for learning management systems, where 94% of installed systems come from the top 5 vendors. Even in this area, however, there are 24 vendors and open-source groups. At the other extreme is web content management, where 89 active companies and groups compete and the top providers account for just over a third of the market.

One way major vendors compete under circumstances like these is by seeking entrée into the informal networks through which institutions share information and experiences. They do this, in many cases, by inviting campus CIOs or administrative-systems heads to join advisory groups or participate in vendor-sponsored conferences.

That these groups are usually more about promoting product than seeking strategic or technical advice is clear. They are typically hosted and managed by corporate marketing groups, not technical groups. In some cases the advisory groups comprise only a few members, in some cases they are quite large, and in a few cases there are various advisory tiers. CIOs from large colleges and universities are often invited to various such groups. For the most part these groups have very little effect on vendor marketing, and even less on technical architecture and direction.

So why do CIOs attend corporate advisory board meetings? The value to CIOs, aside from getting to know marketing heads, is that these groups’ meetings provide a venue for engaging enterprise issues with peers. The problem is that the number of meetings and their oddly overlapping memberships lead to scattershot conversations inevitably colored by the hosts’ marketing goals and technical choices. It is neither efficient nor effective for higher education to let vendors control discussions of Enterprise IT.

Before corporate advisory bodies became so prevalent, there were groups within higher-education IT that focused on Enterprise IT and especially on administrative systems and network infrastructure. Starting with 1950s workshops on the use of punch cards in higher education, CUMREC hosted meetings and publications focused on the business use of information technology. CAUSE emerged from CUMREC in the late 1960s, and remained focused on administrative systems. EDUCOM came into existence in the mid-1960s, and its focus evolved to complement those of CAUSE and CUMREC by addressing joint procurement, networking, academic technologies, copyright, and in general taking a broad, inclusive approach to IT. Within EDUCOM, the Net@EDU initiative focused on networking much the way CUMREC focused on business systems.

As these various groups melded into a few larger entities, especially EDUCAUSE, Enterprise IT remained a focus, but it was only one of many. Especially as the Y2K challenge prompted increased attention to administrative systems and intensive communications demands prompted major investments in networking, the prominence of Enterprise IT issues in collective work diffused further. Internet2 became the focal point for networking engagements, and corporate advisory groups became the focal point for administrative-systems engagements. More recently, entities such as Gartner, the Chronicle of Higher Education, and edu1world have tried to become influential in the Enterprise IT space.

The results of the overlap among vendor groups and associations, unfortunately, are scattershot attention and dissipated energy in the higher-education Enterprise IT space. Neither serves higher education well. Overlap thus joins accelerated change as a major argument for refocusing and reenergizing Enterprise IT.

The Importance of Enterprise IT

Enterprise IT, through its emphasis on core institutional activities, is central to the success of higher education. Yet the community’s work in the domain has yet to coalesce into an effective whole. Perhaps this is because we have been extremely respectful of divergent traditions, communities, and past achievements.

We must not be disrespectful, but it is time to change this: to focus explicitly on what Enterprise IT needs in order to continue advancing higher education, to recognize its strategic importance, and to restore its prominence.

9/25/12 gj-a  

The Rock, and The Hard Place

Looking into the near-term future—say, between now and 2020—we in higher education have to address two big challenges, both involving IT. Neither admits easy progress. But if we don’t address them, we’ll find ourselves caught between a rock and a hard place.

  • The first challenge, the rock, is to deliver high-quality, effective e-learning and curriculum at scale. We know how to do part of that, but key pieces are missing, and it’s not clear how we will find them.
  • The second challenge, the hard place, is to recognize that enterprise cloud services and personal devices will make campus-based IT operations the last rather than the first resort. This means everything about our IT base, from infrastructure through support, will be changing just as we need to rely on it.

“But wait,” I can hear my generation of IT leaders (and maybe the next) say, “aren’t we already meeting those challenges?”

If we compare today’s e-learning and enterprise IT with those of the recent past, those leaders might rightly suggest, immense change is evident:

  • Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors—none were common even in 2000, and they’re commonplace today.
  • Commercial administrative systems, virtualized servers, corporate-style email, web front ends—ditto.

That’s progress and achievement we all recognize, applaud, and celebrate. But that progress and achievement overcame past challenges. We can’t rest on our laurels.

We’re not yet meeting the two broad future challenges, I believe, because in each case fundamental and hard-to-predict change lies ahead. The progress we’ve made so far, however impressive and effective, won’t steer us between the rock of e-learning and the hard place of enterprise IT.

The fundamental change that lies ahead for e-learning
is the transition from campus-based to distance education

Back in the 1990s, Cliff Adelman, then at the US Department of Education, did a pioneering study of student “swirl,” that is, students moving through several institutions, perhaps with work intervals along the way, before earning degrees.

“The proportion of undergraduate students attending more than one institution,” he wrote, “swelled from 40 percent to 54 percent … during the 1970s and 1980s, with even more dramatic increases in the proportion of students attending more than two institutions.” Adelman predicted that “…we will easily surpass a 60 percent multi-institutional attendance rate by the year 2000.”

Moving from campus to campus for classes is one step; taking classes at home is the next. And so distance education, long constrained by the slow pace and awkward pedagogy of correspondence courses, has come into its own. At first it was relegated to “nontraditional” or “experimental” institutions—Empire State College, Western Governors University, UNext/Cardean (a cautionary tale for another day), Kaplan. Then it went mainstream.

At first this didn’t work: fathom.com, for example, a collaboration among several first-tier research universities led by Columbia, found no market for its high-quality online offerings. (Its Executive Director has just written a thoughtful essay on MOOCs, drawing on her fathom.com experience.)

Today, though, a great many traditional colleges and universities successfully bring instruction and degree programs to distant students. Within the recent past these traditional institutions have expanded into non-degree efforts like OpenCourseWare and to broadcast efforts like the MOOC-based Coursera and edX. In 2008, 3.7% of students took all their coursework through distance education, and 20.4% took at least one class that way.

Learning management systems, electronic reserves, video jukeboxes, collaboration environments, streamed and recorded video lectures, online tutors, the innovations that helped us overcome past challenges—little of that progress was designed for swirling students who do not set foot on campus.

We know how to deliver effective instruction to motivated students at a distance. But many policy issues remain unresolved: we don’t yet know how to

  • confirm their identity,
  • assess their readiness,
  • guide their progress,
  • measure their achievement,
  • standardize course content,
  • construct and validate curriculum across diverse campuses, or
  • certify degree attainment

in this imminent world. Those aren’t just IT problems, of course. But solving them will almost certainly challenge IT.

The fundamental change that lies ahead for enterprise technologies
is the transition from campus IT to cloud and personal IT

The locus of control over all three principal elements of campus IT—servers and services, networks, and end-user devices and applications—is shifting rapidly from the institution to customers and third parties.

As recently as ten years ago, most campus IT services, everything from administrative systems through messaging and telephone systems to research technologies, were provided by campus entities using campus-based facilities, sometimes centralized and sometimes not. The same was true for the wired and then wireless networks that provided access to services, and for the desktop and laptop computers faculty, students, and staff used.

Today shared services are migrating rapidly to servers and systems that reside physically and organizationally elsewhere—the “cloud”—and the same is happening for dedicated services such as research computing. It’s also happening for networks, as carrier-provided cellular technologies compete with campus-provided wired and WiFi networking, and for end-user devices, as highly mobile personal tablets and phones supplant desktop and laptop computers.

As I wrote in an earlier post about “Enterprise IT,” the scale of enterprise infrastructure and services within IT and the shift in their locus of control have major implications both for campus IT and for the organizations that have provided it. Campus IT organizations grew up around locally-designed services running on campus-owned equipment managed by internal staff. Organization, staffing, and even funding models ensued accordingly. Even in academic computing and user support, “heavy metal” experience was valued highly. The shifting locus of control makes other skills at least as valuable: the ability to negotiate with suppliers, to engage effectively with customers (indeed, to think of them as “customers” rather than “users”), to manage spending and investments under constraint, and to explain these changes clearly.

To be sure, IT organizations still require highly skilled technical staff, for example to fine-tune high-performance computing and networking, to ensure that information is kept secure, to integrate systems efficiently, and to identify and authenticate individuals remotely. But these technologies differ greatly from traditional heavy metal, and so must enterprise IT.

The rock, IT, and the hard place

In the long run, it seems to me that the campus IT organization must evolve rapidly to center on seven core activities.

Two of those are substantive:

  • making sure that researchers have the technologies they need, and
  • making sure that teaching and learning benefit from the best thinking about IT applications and effectiveness.

Four others are more general:

  • negotiating and overseeing relationships with outside providers;
  • specifying or doing what is necessary for robust integration among outside and internal services;
  • striking the right personal/institutional balance between security and privacy for networks, systems, and data; and last but not least
  • providing support to customers (both individuals and partner entities).

The seventh core activity, which should diminish over time, is

  • operating and supporting legacy systems.

Creative, energetic, competent staff are sine qua non for achieving that kind of forward-looking organization. It’s very hard to do good IT without good, dedicated people, and those are increasingly difficult to find and keep. Not least, this is because colleges and universities compete poorly with the stock options, pay, glitz, and technology the private sector can offer. Therein lies another challenge: promoting loyalty and high morale among staff who know they could be making more elsewhere.

To the extent the rock of e-learning and the hard place of enterprise IT frame our future, we not only need to rethink our organizations and what they do; we also need to rethink how we prepare, promote, and choose higher-education IT leaders on campus and elsewhere—the topic, fortuitously, of a recent ECAR report, and of widespread rethinking within EDUCAUSE.

We’ve been through this before, and risen to the challenge.

  • Starting around 1980, minicomputers and then personal computers brought IT out of the data center and into every corner of higher education, changing data center, IT organization, and campus in ways we could not even imagine.
  • Then in the 1990s campus, regional, and national networks connected everything, with similarly widespread consequences.

We can rise to the challenges again, too, but only if we understand their timing and the transformative implications.

Transforming Higher Education through Learning Technology: Millinocket?

Down East

Note to prospective readers: This post has evolved, through extensive revision and expansion and more careful citation, into a paper available at http://gjackson.us/it-he.pdf.

You might want to read that paper, which is much better and more complete, instead of this post — unless you like the pictures here, which for the moment aren’t in the paper. Even if you read this post for the pictures, please go read the paper as well.

“Which way to Millinocket?,” a traveler asks. “Well, you can go west to the next intersection…” the drawling down-east Mainer replies in the Dodge and Bryan story,

“…get onto the turnpike, go north through the toll gate at Augusta, ’til you come to that intersection…. well, no. You keep right on this tar road; it changes to dirt now and again. Just keep the river on your left. You’ll come to a crossroads and… let me see. Then again, you can take that scenic coastal route that the tourists use. And after you get to Bucksport… well, let me see now. Millinocket. Come to think of it, you can’t get there from here.”

PLATO and its programmed-instruction kin were supposed to transform higher education. So were the Apple II, and then the personal computer – PC and then Mac – and then the “3M” workstation (megapixel display, megabyte memory, megaflop speed) for which Project Athena was designed. So were simulated laboratories, so were BITNET and then the Internet, so were MUDs, so was Internet2, so was artificial intelligence, so was supercomputing.

Each of these most certainly has helped higher education grow, evolve, and gain efficiency and flexibility. But at its core, higher education remains very much unchanged. That may no longer suffice.

What about today’s technological changes and initiatives – social media, streaming video, multi-user virtual environments, mobile devices, the cloud? Are they to be evolutionary, or transformational? If higher education needs the latter, can we get there from here?

It’s important to start conversations about questions like these from a common understanding of information technologies that currently play a role in higher education, what that role is, and how technologies and their roles are progressing. That’s what prompted these musings.

Information Technology

For the most part, “information technology” means a tripartite array of hardware and software:

  • end-user devices, which today range from large desktop workstations to small mobile phones, typically with some kind of display, some way to make choices and enter text, and various other capabilities variously enabled by hardware and software;
  • servers, which are not just racks of processors, storage, and other hardware but aggregations of hardware, software, applications, and data that provide services to multiple users (when the aggregation is elsewhere, it’s often called “the cloud” today); and
  • networks, wireless or wired, which interlink local servers, remote server clouds, and end-user devices, and which typically comprise copper and glass cabling, routers and switches and optronics, and a network operating system plus some authentication and logging capability.

Information technology tends to progress rapidly but unevenly, with progress or shortcomings in one domain driving or retarding progress in others.

Today, for example, the rapidly growing capability of small smartphones has taxed previously underused cellular networks. Earlier, excess capability in the wired Internet prompted innovation in major services like Google and YouTube. The success of Google and Amazon forced innovation in the design, management, and physical location of servers.

Perhaps the most striking aspects of technological progress have been its convergence and integration. Whereas once one could reasonably think separately about servers, networks, and end-user devices, today the three are not only tightly interconnected and interdependent, but increasingly their components are indistinguishable. Network switches are essentially servers, servers often comprise vast arrays of the same processors that drive end-user devices plus internal networks, and end-user devices readily tackle tasks – voice recognition, for example – that once required massive servers.

Access to Information Technology

Progress, convergence, and integration in information technology have driven dramatic and fundamental change in the information technologies faculty, students, colleges, and universities have. That progress is likely to continue.

Here, as a result, are some assumptions we can reasonably make today:

  • Households have some level of broadband access to the Internet, and at least one computer capable of using that broadband access to view and interact with Web pages, handle email and other messaging, listen to audio, and view videos of at least YouTube quality.
  • Teenagers and most adults have some kind of mobile phone, and that phone usually has the capability to handle routine Internet tasks like viewing Web pages and reading email.
  • Colleges and universities have building and campus networks operating at broadband speeds of at least 10Mb/sec, and most have wireless networks operating at 802.11b (11Mb/sec) or greater speed.
  • Server capacity has become quite inexpensive, largely because “cloud” providers have figured out how to gain and then sell economy of scale.
  • Everyone – or at least everyone between the ages of, say, 12 and 65 – has at least one authenticated online identity, including email and other online service accounts; Facebook, Twitter, Google, or other social-media accounts; online banking, financial, or credit-card access; or network credentials from a school, college or university, or employer.
  • Everyone knows how to search on the Internet for material using Google, Bing, or other search engines.
  • Most people have a digital camera, perhaps integrated into their phone and capable of both still photos and videos, and they know how to send photos to others or offload them onto their computers or an online service.
  • Most college and university course materials are in electronic form, and so is a large fraction of library and reference material used by the typical student.
  • Most colleges and universities have readily available facilities for creating video from lectures and similarly didactic events, whether in classrooms or in other venues, and for streaming or otherwise making that video available online.

It’s striking how many of these assumptions were invalid even as recently as five years ago. Most of the assumptions were invalid a decade before that (and it’s sobering to remember that the “3M” workstation was a lofty goal as recently as 1980 and cost nearly $10,000 in the mid-1980s, yet today’s iPhone almost exceeds the 3M spec).

Looking a bit into the future, here are some further assumptions that probably will be safe:

  • Typical home networking and computers will have improved to the point they can handle streamed video and simple two-way video interactions (which means that at least one home computer will have an add-on or built-in camera).
  • Most people will know how to communicate with individuals or small groups online through synchronous social media or messaging environments, in many cases involving video.
  • Authentication and monitoring technologies will exist to enable colleges and universities to reasonably ensure that their testing and assessment of student progress is protected from fraud.
  • Pretty much everyone will have the devices and accounts necessary to maintain ubiquitous connectivity with anybody else and to use services from almost any college, university, or other educational provider.

Technology, Teaching, and Learning

In colleges and universities, as in other organizations, information technology can promote progress by enabling administrative processes to become more efficient and by creating diverse, flexible pathways for communication and collaboration within and across different entities. That’s organizational technology, and although it’s very important, it affects higher education much the way it affects other organizations of comparable size.

Somewhat more distinctively, information technology can become learning technology, an integral part of the teaching and learning process. Learning technology sometimes replaces traditional pedagogies and learning environments, but more often it enhances and expands them.

The basic technology and middleware infrastructure necessary to enable colleges and universities to reach, teach, and assess students appears to exist already, or will before long. This brings us to the next question: What applications turn information technology into learning technology?

To answer this, it’s useful to think about four overlapping functions of learning technology.

Amplify and Extend Traditional Pedagogies, Mechanisms, and Resources

For example, by storing and distributing materials electronically, by enabling lectures and other events to be streamed or recorded, and by providing a medium for one-to-one or collective interactions among faculty and students, IT potentially expedites and extends traditional roles and transactions. Similarly, search engines and network-accessible library and reference materials vastly increase faculty and student access. The effect, although profound, nevertheless falls short of transformational. Chairs outside faculty doors give way to “learning management systems” like Blackboard or Sakai or Moodle, wearing one’s PJs to 8am lectures gives way to watching lectures from one’s room over breakfast, and library schools become information-science schools. But the enterprise remains recognizable. Even when these mechanisms go a step further, enabling true distance education whereby students never set foot on campus (in 2011, 3.7% of all students took all their coursework through distance education), the resulting services remain recognizable. Indeed, they are often simply extensions of existing institutions’ campus programs.

Make Educational Events and Materials Available Outside the Original Context

For example, the Open Courseware initiative (OCW) started as a publicly accessible repository of lecture notes, problem sets, and other material from MIT classes. It has since grown to include similar material from scores of other institutions worldwide. Similarly, the newer Khan Academy has collected a broad array of instructional videos on diverse topics, some from classes and some prepared especially for Khan, and made those available for anyone interested in learning the material. OCW, Khan, and initiatives like them provide instructional material in pure form, rather than as part of curricula or degree programs.

Enable Experience-Based Learning 

This most productively involves experience that might have been unaffordable, dangerous, or otherwise infeasible. Simulated chemistry laboratories and factories were an early example – students could learn to synthesize acetylene by trial and error without blowing up the laboratory, or to fine-tune just-in-time production processes without bankrupting real manufacturers. As computers have become more powerful, so have simulations become more complex and realistic. As simulations have moved to cloud-based servers, multi-user virtual environments have emerged, which go beyond simulation to replicate complex environments. Experiences like these were impossible to provide before the advent of powerful, inexpensive server clouds, ubiquitous networking, and graphically capable end-user devices.

Replace the Didactic Classroom Experience

This is the most controversial application of learning technology – “Why do we need faculty to teach calculus on thousands of different campuses, when it can be taught online by a computer?” – but also one that drives most discussion of how technology might transform higher education. It has emerged especially for disciplines and topics where instructors convey what they know to students through classroom lectures, readings, and tutorials. PLATO (Programmed Logic for Automated Teaching Operations) emerged from the University of Illinois in the 1960s as the first major example of computers replacing teachers, and has been followed by myriad attempts, some more successful than others, to create technology-based teaching mechanisms that tailor their instruction to how quickly students master material. (PLATO’s other major innovation was partnership with a commercial vendor, the now defunct Control Data Corporation.)

Higher Education

We now come to the $64 question: what role might trends in higher-education learning technology play in the potential transformation of higher education?

The transformational goal for higher education is to carry out its social and economic roles with greater efficiency and within its resource constraints. Many believe that such transformation requires a very different structure for future higher education. What might that structure be, and what role might information technologies play in its development?

The fundamental purpose of higher education is to advance society, polity, and the economy by increasing the social, political, and economic skills and knowledge of students – what economists call “human capital”. At the postsecondary level, education potentially augments students’ human capital in four ways:

  • admission, which is to say declaring that a student has been chosen as somehow better qualified or more adaptable in some sense than other prospective students (this is part of Lester Thurow‘s “job queue” idea);
  • instruction, including core and disciplinary curricula, the essentially unidirectional transmission of concrete knowledge through lectures, readings, and the like, and also the explication and amplification of that through classroom, tutorial, and extracurricular guidance and discussion (this is what we often mean by the narrow term “teaching”);
  • certification, specifically the measuring of knowledge and skill through testing and other forms of assessment; and
  • socialization, specifically learning how to become an effective member of society independently of one’s origin family, through interaction with faculty and especially with other students.

Sometimes a student gets all four together. For example, MIT marked me even before I enrolled as someone likely to play a role in technology (admission), taught me a great deal about science and engineering generally, electrical engineering in particular, and their social and economic context (instruction), documented through grades based on exams, lab work, and classroom participation that I had mastered (or failed to master) what I’d been taught (certification), and immersed me in an environment wherein data-based argument and rhetoric guided and advanced organizational life, and thereby helped me understand how to work effectively within organizations, groups, and society (socialization).

Most students attend colleges whose admissions processes amount to open admission, or involve simple thresholds rather than competition. That is, anyone who meets certain standards, such as high-school completion with a given GPA or test score, is admitted. In 2010, almost half of all institutions reported having no admissions criteria, and barely 11% accepted fewer than a quarter of their applicants. Moreover, most students do not live on campus — in 2007-08, only 14% of undergraduates lived in college-owned housing. This means that most of higher education has limited admission and socialization effects. Therefore, for the most part higher education affects human capital through instruction and certification.

Instruction is an especially fertile domain for technological progress. This is because three trends converge around it:

  • ubiquitous connectivity, especially from students’ homes;
  • the rapidly growing corpus of coursework offered online, either as formal credit-bearing classes or as freestanding materials from entities like OCW or Khan; and
  • (perhaps more speculatively) the growing willingness of institutions to grant credit and allow students to satisfy requirements through classes taken at other institutions or through some kind of testing or assessment.

Indeed, we can imagine a future where it becomes commonplace for students to satisfy one institution’s degree requirements with coursework from many other institutions. Further down this road, we can imagine there might be institutions that admit students, prescribe curriculum, certify progress, and grant degrees – but have no instructional faculty and do not offer courses. This, in turn, might spawn purely instructional institutions.

One problem with such a future is that socialization, a key function of higher education, gets lost. This points the way to one major technology challenge for the future: Developing online mechanisms, for students who are scattered across the nation or the world, that provide something akin to rich classroom and campus interaction. Such interaction is central to the success of, for example, elite liberal-arts colleges and major residential universities. Many advocates of distance education believe that social media such as Facebook groups can provide this socialization, but that potential has yet to be realized.

A second problem with such a future is that robust, flexible methods for assessing student learning at a distance remain either expensive or insufficient. For example, ProctorU and Kryterion are two of several commercial entities that provide remote exam proctoring, but they do so through somewhat intensive use of video observation, and that only works for rather traditional written exams. For another example, in the aftermath of 9/11 many universities figured out how to conduct doctoral thesis defenses using high-bandwidth videoconferencing facilities rather than flying in faculty from other institutions, but this simply reduced travel expense rather than changing the basic idea that several faculty members would examine one student at a time.

Millinocket

If learning technologies are to transform higher education, we must exploit opportunities and address problems. At the same time, transformed higher education cannot neglect important dimensions of human capital. In that respect, our goal should be not only to make higher education more efficient than it is today, but also better.

Drivers headed for Millinocket rarely pull over any more to ask directions of drawling downeasters. Instead, they rely on the geographic position and information systems built into their cars or phones or computers, which in turn rely on network connectivity to keep maps and traffic reports up to date. To be sure, reliance on GPS and GIS tends to insulate drivers from interaction with the diversity they pass along the road, much as Interstate highways standardized cross-country travel. So the gain from those applications is not without cost.

The same is true for learning technology: it will yield both gains and losses. Effective progress will result only if we explore and understand the technologies and their applications, decide how these relate to the structure and goals of higher education, identify obstacles and remedies, and figure out how to get there from here.

Working Together Online: Are We There Yet?

Most of you reading this are probably too young to have seen Seven Days in May, in which Burt Lancaster, as General James Mattoon Scott, leads a military coup attempt within the United States. Good conspiracy story, so-so flick, old helicopters, and it harps on a weird bit of technology: every time Scott talks with his counterparts, they fire up big, old, black-and-white console TVs in their offices so they can see each other’s heads as they talk.

Let’s grant that seeing and talking is better than just talking. But the technology to do that can be confining, in that it requires much more equipment, networking, configuration, and support than a simple phone call. The difference is even greater if the phone call is standard but the video connection isn’t. The striking thing about the use of videoconferencing in Seven Days in May is that it appears to add almost no value to communication while making it much more complicated and constraining. (If the movie had been made a year later, it might have used Picturephones instead of TVs, but they weren’t much better.)

My question for today is how the balance between value and cost works out for collaboration and communication technologies we commonly consider today, putting aside one-to-one interactions as a separate case that’s pretty well understood. It seems to me that six mechanisms dominate when we work together at a distance:

  1. voice calls, perhaps augmented with material viewable online,
  2. text-based sharing and collaboration (for example, listservs and wikis),
  3. personal video “calls” (for example, using Skype, Google Video Chat, Oovoo, Vidyo, and similar services),
  4. online presentations combined with text-based chat-like response (for example, Adobe Connect or Cisco WebEx with broadcast audio),
  5. voice calls combined with multi-user tools for synchronous editing or whiteboarding (for example, using Google Docs, Office Live, a wiki, or a group whiteboarding tool during a conference call), and
  6. large-screen video facilities (such as Cisco’s or Polycom’s hardware and services installed in well-designed facilities).

This is by no means a complete set of mechanisms. But I think it spans the space and helps distinguish where we have good value propositions and where we have work to do. (In addition to one-to-one mechanisms, I’ve ignored one-way webcasts with no audience-response mechanisms, since I think we understand those pretty well too.)

Different mechanisms entail different costs (meaning both expense and difficulty). Their effectiveness varies depending on how they’re used. Their uses vary along two dimensions: the purpose of the interaction (presentation, communication, or collaboration) and the situation (one-to-many, few-to-few, or many-to-many — with the boundary between “few” and “many” being the familiar 7±2).

I think the nine logical combinations reduce to five interesting use cases: one-to-many presentations, few-to-few communication or collaboration, and many-to-many communication or collaboration.

I’ve found it useful to collapse all that into a matrix, with rows corresponding to the five use cases, columns corresponding to the six mechanisms, and ratings in each cell of effectiveness and cost. Here’s how my matrix turns out, coding “costs” as $ to $$$ and “effectiveness” as A=Excellent down through Good and Meh to D=Poor:

[Matrix not reproduced here: rows are the five use cases, columns the six mechanisms, and each cell carries an effectiveness grade (A through D) and a cost rating ($ to $$$).]
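To make the classification concrete, the matrix can be treated as a lookup table keyed by (use case, mechanism). This is only a sketch: the specific ratings below are illustrative assumptions inferred loosely from the column-by-column discussion that follows, not the actual table.

```python
# Illustrative sketch of the use-case x mechanism rating matrix.
# Effectiveness: A (excellent), B (good), C (meh), D (poor); cost as $ signs.
# These particular cell values are assumptions for demonstration only.
ratings = {
    # (use case, mechanism): (effectiveness, cost)
    ("one-to-many presentation", "voice call"):             ("B", "$"),
    ("few-to-few communication", "voice call"):             ("A", "$"),
    ("many-to-many collaboration", "voice call"):           ("D", "$"),
    ("many-to-many communication", "text sharing"):         ("B", "$"),
    ("few-to-few collaboration", "personal video"):         ("B", "$$"),
    ("one-to-many presentation", "presentation + chat"):    ("A", "$$"),
    ("few-to-few collaboration", "voice + shared editing"): ("A", "$$"),
    ("few-to-few communication", "video facility"):         ("A", "$$$"),
    ("many-to-many communication", "video facility"):       ("D", "$$$"),
}

def best_mechanisms(use_case):
    """Return (mechanism, effectiveness, cost) rows for a use case,
    best effectiveness first ('A' sorts before 'D')."""
    rows = [(mech, eff, cost)
            for (uc, mech), (eff, cost) in ratings.items()
            if uc == use_case]
    return sorted(rows, key=lambda r: r[1])

for mech, eff, cost in best_mechanisms("few-to-few communication"):
    print(f"{mech}: effectiveness {eff}, cost {cost}")
```

A table like this makes the trade-offs queryable: for any use case you can rank mechanisms by effectiveness and then weigh the cheaper options against the more elaborate ones.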
If my ratings are reasonable, what does this tell us? Let’s explore that by working through the columns.

  • Voice calls, as we know all too well, are inexpensive, but they don’t work well for large groups trying to communicate or collaborate. They work somewhat for one-to-many presentations, but their most effective use is for communications among small groups.
  • Text sharing by itself never rises above Good, but it’s better than simple voice calls for collaboration and for many-to-many communication. It’s very hard to have a conversation among more than about seven people on the phone, but it’s quite possible for much larger numbers to hold text-based online “conversations”.
  • Personal video (whose costs are higher than those of voice calls and text sharing, because it typically requires cameras, better networking, a certain amount of extra setup, and sometimes a service subscription) doesn’t work very well beyond few-to-few situations. It’s better than phone calls for few-to-few collaboration, I think, because being able to see faces (which is pretty much all one can see) seems to help groups reach consensus more easily. Although it costs more than voice calls, in my experience it adds little value to presentations or few-to-few communications.
  • Presentation technologies that combine display and audio broadcast with some kind of text-response capability are very widely used. In my view, they’re the best technology for one-to-many presentations. They’re less useful for few-to-few interactions, largely because in those situations voice interactions are feasible and are much richer than the typical chat box. I rate their usefulness for many-to-many collaboration similarly, but rate it lower than text sharing for this use because the typical chat mechanisms within these technologies cope poorly with large volumes of comments from lots of participants. Text-sharing mechanisms, which usually have search, threading, and archiving capabilities, cope much better with volume.
  • Voice calling combined with synchronously editable documents or whiteboards is turning out to be very useful, I think, in that it combines the richness of conversation with the visual coherence of a document or whiteboard. This makes it especially effective for few-to-few situations, and even for one-to-many presentations — although it can’t cope if too many people try modifying a document or whiteboard at the same time (in that case, more structured technologies like IdeaSpace are useful, albeit much less flexible).
  • Finally, although I’ve spent a great deal of time in online presentation, communication, and collaboration using specialized videoconferencing facilities, I’ve come to believe that they are most effective only for few-to-few communications. They’re reasonable for few-to-few collaboration, but this use case usually produces some push-and-pull between looking at other participants and working together on documents or whiteboards. They’re not very effective for presentations or many-to-many interactions because except in rare cases there are capacity limitations (although interesting counterexamples are emerging, such as some classrooms at Duke’s Fuqua business school).

What might we infer from all this?

  • First, it’s striking that some simple, inexpensive technologies remain the best or most cost-effective way to handle certain use cases. Although it’s unsurprising, this is sometimes hard to remember amidst the pressure to keep up with the technologically advanced Joneses.
  • Second, it’s been interesting to see unexpected combinations of technologies such as jointly editing documents during conference calls become popular and effective even in the absence of marketing — I’ve never seen a formal ad or recommendation for that, even as its use proliferates.
  • Third, and as unsurprising as the first two, it’s clear that good solutions to the many-to-many challenge remain elusive. Phone calls, personal video, and video facilities all fail in this situation, regardless of purpose. Hybrid and text-based tools don’t do much better. If one wants a large group to communicate effectively within itself other than by one-to-many presentations, there’s no good way to achieve that technologically. As our organizations become ever more distributed geographically and travel becomes ever more difficult and expensive, the need for many-to-many technologies is going to increase.

Of course the technologies I’ve chosen may not be the right set, there may be other important use cases, and my ratings may not be accurate. But going through the classification and rating exercise helped clarify some concerns I’d been unable to frame. I encourage others to explore their views in similar ways, and perhaps we’ll learn something by comparing notes.