Susan S. Silbey
In anticipation of the March 8, 2019 MacVicar Day symposium, I was asked to think about "What is important to a 21st century undergraduate education and what should MIT do about it." I answered more briefly along the following lines.
A 21st century undergraduate education should be much the same as an excellent 20th or even 19th century education: a simultaneously broad and deep education, exploring across subjects and burrowing deeply into a few. At its core, excellent education is about learning how to learn – more about developing habits of mind, and more about both disciplined and imaginative inquiry, than about particular substantive information, theories, or methodological techniques. Ultimately, education should destabilize taken-for-granted ways of seeing to provide multiple lenses with which to encounter the world.
We could, of course, talk about education even more boldly, as MIT likes to say, "To make a better world." But, I am not persuaded that that is the best message for the students, even if it appears to be popular with generous philanthropists. Claiming such a bold agenda seems counter to the sort of humility that I feel prompts the deepest sorts of learning and growth in our students, ourselves, and the institution as a whole.
To be sure, there is a risk of being too narrow in our ambitions. Personally, I wish we would also push back against the temptation to turn undergraduate education into professional or occupational training. Such an agenda is not even practical: abundant evidence shows that there is little connection between a student's college curriculum and her eventual career.
Ten to fifteen years after graduation, most students will not be doing what they studied in college, whether they were engineering or history majors. (Those who lead the way in an occupational bubble may be rewarded for having picked – and stayed in – the proverbial "right" field, but when the bubble bursts – as it always does once the field is saturated – those who instrumentally chose a learning path to follow the herd will not experience the career benefits, while the general lack of correlation between college training and eventual career persists across longer time spans.)
Further in this vein, I wish we would also push back against the impulse to turn all topics and subjects into "problem solving," as if life were a series of tests demanding that we produce the right or efficient answer. When it comes down to it, I do believe we "make a better world" through an MIT education, but simply because the best reason for getting an education at MIT or anywhere else is that education is a valuable end in itself, not just for the careers it enables or the immediate problems it solves. It is better to be educated rather than not to be, not simply because income and life chances are higher for those with a college education (although that is true) but because education recasts human beings' ways of being in the world and that, in and of itself, has transformative potential. An excellent education creates new instincts in the individual; a habit of looking for new meanings; of questioning comfortable thoughts; of being able to see multiple points of view at the same time; of perpetually playing with and fighting about the meanings we assign to events and texts and phenomena so that we can understand them more deeply and in their full complexity. It is about making each of life's experiences slower – as those events are apprehended as more layered and multidimensional, their contexts and consequences more fully appreciated. An excellent education creates citizens who experience the enduring quality of the present while recognizing in it the legacies of the past. This is not a new vision of education but a very old one.
A Fundamental Education
Unfortunately, the commitment to fundamental education (even in science and engineering) is being challenged by market pressures that encourage students to see the world through one set of values and meanings to the exclusion of others. If we are not careful, students become conditioned to value and pursue only that which the current market values and pursues (disruptive innovations, profit) more than truth, critical thinking, empathy for differences, and learning how to learn. Across the nation, there are predictions about the demise of the humanities precisely because of this. Colleges and universities are closing departments to pursue more training – few seem to say education – in computation and algorithmic reasoning. Some call this a consequence of computing and the digital transformation of everyday life; or is it instead a consequence of our losing focus on the true meaning and value of a good education?
Fortunately, at last month's celebration of the founding of the MIT Schwarzman College of Computing, I heard something hopeful. I heard repeated calls for more humanistic education, for greater understanding of social processes and moral challenges. I heard the same at the 2019 MacVicar celebration too.
This may be MIT's moment in history. I have also heard this from colleagues here and across the nation. As part of the deliberations on the possible shapes of the College, we have been reaching out widely. Although we are behind some other universities that began such adventures years before MIT and that are further along in developing new curricula, research collaborations, and organizational units, I am told that whatever MIT does, we will be watched carefully and taken as a beacon and a benchmark. This is quite a challenge.
A beacon and a benchmark can be a heavy burden and special responsibility. Such ambition feeds persistent worries I harbor about MIT's own transformation over the last 20+ years from a modest institution that at times did extraordinary things, to an institution that regards itself (and is apparently regarded by others) as extraordinary.
It is why I sometimes worry about the bold claim "to make a better world." If we aren't careful, that self-image could turn to hubris – could encourage self-pride and insularity, a focus on nourishing the brand rather than the product itself (education and research). When MIT was less celebrated, we were willing to stand apart from our neighbors and peers. Recall that in the 1950s, MIT refused to supply the names of faculty whom some members of Congress regarded as threats to the nation. Again in the 1990s, MIT refused to consent to federal antitrust charges of collusion with the Ivy universities (known as the Overlap case) in setting financial aid on the basis of need, without considering a student's merit or trying to compete with the others for admitted students. In 1999, MIT shared with the world its study documenting widespread gender discrimination in the School of Science. Immediately upon seeing the report of this historic confession in The New York Times, some of MIT's peer institutions published vehement denials that such reprehensible practices could be found at their universities. According to their spokespersons, neither Harvard, Stanford, Berkeley, nor Yale practiced such gender discrimination. Or perhaps none had the humility and courage to take the hard, close look MIT did.
The Courage to Make Difficult Decisions
Do we have the courage today to make such difficult decisions again? Will the Schwarzman College become the impetus for MIT to offer a truly excellent education? I hope so. To educate a truly new kind of critical-thinking technologist, with a broad as well as deep education, computing will need to be integrated with just about every other subject at MIT. For MIT graduates to leave with the knowledge and resources to be wiser, more ethically competent as well as technologically competent citizens demands that students have more rather than less immersion in the humanities and social sciences. These cannot be requirements to get past – as they are often treated. Nor can attention to social organization, culture, and public policy be treated superficially as something everybody knows, ignoring the knowledge and expertise that characterize the notoriously mislabeled "soft sciences."
We all seem to acknowledge that our contemporary digital world reflects certain fundamental misunderstandings of and disregard for human behavior and social organization, resulting from the actions and oversights of both its inventors and its objects/subjects (i.e., users).
By ignoring human variation, social organization and context, tools that were designed to connect people across the globe in the open exchange of ideas and information have been turned into an efficient machine of incessant surveillance, a seemingly insatiable engine of profit at the expense of other values, a platform for organized hate, and a possible catalyst for the destruction of representative democracy.
A well-educated technologist with greater understanding of the importance of context, of culture and its variations – a technologist with the ability to understand institutions and organizations – would, we hope, be less likely to make these kinds of mistakes.
If we, across the Institute, and especially in the humanities, arts, and social sciences, take up the challenge, we may actually create the 21st century education I hope for. But this cannot be achieved simply by wishing it to be so. Without doubt, it requires a redistribution of the current allocation of resources. Of course, we are a university built primarily on science and engineering; MIT's special mission is the foundation of all of our work here. We will not, however, be able to make that better world, nor repair the problems that technologists have created, if we do not provide more abundant resources for humanists and social scientists to participate more fully in imagining, developing, and critiquing technological inventions.
In the spirit of greater concreteness, I conclude this column with an example circulating around the Institute about ostensibly responsible innovation, to illustrate a shortsighted versus a more capacious vision of a better world.
Research groups have been thinking about programming autonomous cars so that they will make "ethical" and "responsible" decisions when confronted with information demanding a distributive choice, an adaptation of the canonical trolley problem. Faced with a choice of hitting a trolley filled with people or killing a single person (perhaps a pregnant woman, a person pushing a baby carriage, perhaps a fat man whose weight can stop the car), what should the algorithm instruct the car to do? More recent discussions claim to have advanced in sophistication by moving from the dilemmas philosophers have been exploring for more than a century to questions of liability – who should bear the monetized costs of the accident? And yet, in all these projects the more significant question concerning the responsibilities of AI is ignored: why are we designing autonomous cars in the first place?
Indeed, this is precisely what I was referring to earlier when I said a good education should destabilize taken-for-granted ways of seeing, should provide multiple lenses through which to encounter the world. Why are we devoting talent and resources – including valuable and limited teaching and learning time – to this question rather than focusing on climate change, the rising seas, environmental degradation or – perhaps closer to the specific issue of moving persons from one place to another – the lack of reliable and effective public transportation in Boston or the nation (e.g., high-speed rail)? Of course, I know the answer. Well-heeled philanthropists and corporations such as Google, Amazon, Uber, and Lyft are willing to pay for this research as part of long-term business strategies predicated on the reduction or elimination of labor costs. Where is MIT's public and historic responsibility? Where is our responsibility as educators to see the world through multiple lenses, to destabilize our own taken-for-granted ways of seeing, and to pass these habits of mind on to our students? Is MIT leading or following the nation?