Fuzzy and Techie: A False Divide?

[MUSIC] Stanford University.>>Good evening, and welcome to tonight’s program, Fuzzy and
Techie: A False Divide. I’m Louis Newman,
the Director of Undergraduate Advising and Research here at Stanford, and it’s my
pleasure to welcome all of you on behalf of the Office of the Vice-Provost for
Undergraduate Education, which is the primary sponsor
of this evening’s event. I want to begin by acknowledging and expressing our gratitude to
the co-sponsors of this evening: the School of Humanities and Sciences,
the School of Engineering, BEAM, Stanford’s Career Education Center,
and the Stanford Alumni Association. This evening’s program
is being videotaped, and a link to that tape will be
put on the website for this event. I encourage you to share it
with your friends, parents, and colleagues who may not be able to be
here to join us in person this evening. Finally, I want to give a special shout
out to Niles Wilson, program coordinator in UAR, who’s managed all the logistics
for our program this evening. We live in an increasingly techie world,
one in which our lives are more and more profoundly shaped by
innovations that touch and transform us, indeed,
every aspect of our society. In a world, and
in an academic institution like Stanford, where technological expertise and
ingenuity are so highly prized, it’s no wonder that fuzzy skills
often seem somewhat sidelined. It’s not only the perennial what-are-you-going-
to-do-with-that-major career question in response to a student deciding to major
in a humanities or social science field. The question is actually deeper. What value does humanistic inquiry have in a world increasingly oriented
towards science and technology? Where do fuzzies fit in a techie world? And even more fundamentally,
what is my Stanford education for? What knowledge, skills, and values will
I need for the rest of my life, and how will my education
help me to acquire them? It’s the goal of tonight’s conversation
to explore these and related questions. And I can’t think of a better
group of people to help us do that than the experts we have with us here
tonight, all of them Stanford alumni. I want now to introduce them to you, and
then invite them to take the stage for tonight’s program. Scott Hartley is a venture capitalist and
author of The Fuzzy and the Techie: Why the Liberal Arts Will Rule the Digital
World, which was the inspiration for tonight’s program. His book has been featured in
Harvard Business Review, USA Today, and the Wall Street Journal. It was named a Financial Times business
book of the month, a finalist for the FT/McKinsey Bracken Bower Prize for
an author under the age of 35, and has since been translated
into half a dozen languages. It comes out with Penguin India in May,
and in paperback with Mariner Books in June. He has been a Presidential
Innovation Fellow at the White House under President Obama, and a partner at
Mohr Davidow Ventures on Sand Hill Road. And prior to VC he worked at Google,
Facebook, and Harvard’s Berkman Center for
Internet and Society. He received his BA in Political Science
from Stanford, and then an MA and MBA from Columbia. And I should add that at the conclusion of
tonight’s program, Scott will be signing copies of his book at a table out in front
of Cemex Auditorium by the color wall. Tracy Chou is an entrepreneur,
software engineer, and diversity advocate. She’s currently exploring and advising
a range of new projects across the startup world, civic tech and engagement,
and diversity activism. From 2011 to 2016, Tracy was
an engineer and tech lead at Pinterest. Before Pinterest, she worked at Quora,
also as an early engineer. During the previous
federal administration, she was on reserve with the US Digital
Service as a technical consultant. Alongside her engineering career,
Tracy is best known for her work promoting diversity in tech. In 2013, she helped to kick off the wave
of tech company diversity data disclosures with a GitHub repository, collecting
numbers on women in engineering. Tracy is now a founding
member of Project Include. For her advocacy and activism work, she has appeared on the covers
of The Atlantic, Wired, and MIT Technology Review,
been named Forbes Tech 30 under 30, MIT Technology Review
35 Innovators under 35, and Fast Company’s Most
Creative People in Business. She has also been profiled in Vogue and
Elle among other media outlets. Tracy is an advisor to Homebrew VC,
and enjoys working with startups on engineering, product,
culture, and diversity. Tracy graduated from Stanford with
a BS in electrical engineering and an MS in computer science. She was a Truman Scholar and
Mayfield Fellow, and elected to Phi Beta Kappa and Tau Beta Pi. Marissa Mayer was president and
CEO of Yahoo from 2012 to 2017, leading the company’s push to
reinvent itself for the mobile era. Under her leadership and with a renewed
focus on its users, Yahoo grew to serve over 1 billion people worldwide with
over 600 million users on mobile. Marissa also transformed the company’s
approach to advertising, inventing new formats for mobile,
video, native, and social. And ultimately grew the company’s
new areas of investment into a billion dollar business. In June 2017, Marissa oversaw the successful sale of
Yahoo’s operating business to Verizon. Prior to Yahoo, Marissa was
vice-president of local maps and location services at Google. During her 13 years at Google, Marissa held numerous positions, including
engineer, designer, product manager, and executive, and launched more than
100 well known features and products. She played an important
role in Google Search, leading the product management effort for
more than 10 years. Previously, Marissa worked at
the UBS Research Lab in Zurich, and at SRI International, here in Menlo Park. She graduated with honors from
Stanford University with a BS in Symbolic Systems and
an MS in Computer Science. Our moderator tonight is Mehran Sahami,
Teaching Professor and Associate Chair for Education in
the Computer Science Department. He’s also the Robert and Ruth Halperin University Fellow
in Undergraduate Education. Prior to joining
the Stanford Faculty in 2007, he was a Senior Research Scientist
at Google for several years. His research interests include computer
science education, machine learning, and web search. He has over 20 patent filings on a variety
of topics, including machine learning, web search, recommendation
engines in social networks, and e-mail spam filtering that have been
deployed in commercial applications. He co-chaired the ACM/IEEE-CS
joint task force on computer science curricula in 2013, which was responsible for
creating curricular guidelines for college programs in computer
science at an international level. And for that work he received
the 2014 ACM Presidential Award. He received the 2017
CIKM Test of Time Award, recognizing outstanding research
papers that had an important and sustained impact on
the research community. He received his BS, MS, and PhD degrees in
computer science here at Stanford. The format of tonight’s program is that
our panelist will engage in conversation with one another for about 45 minutes. And then we’ll open this up for
questions from the audience. Please join me now in giving a warm
welcome to our distinguished panelists.>>[APPLAUSE]>>So thank you for those generous introductions, Louis, and thank you to all
the panelists for being here tonight. What we’re gonna do, as Louis mentioned,
is have some questions, a few topics that we’re
gonna discuss beforehand. And then we’ll open it
up to questions as well. But we’ll just jump right
into things to get started. And one of the discussion points for tonight is talking about this
notion of fuzzy and techie. So we figured we’d start off by looking at
how people define the notion of fuzzy and techie, what connotations they carry, and
why they’re sometimes viewed as opposites. So why don’t we start with that question, see what your thoughts are?
>>Well, thanks everybody for being here on this beautiful day and
joining us in this fun conversation. So the fuzzy and techie terms, as obviously
all of us at Stanford know, refer to this duality on campus between people who study
the humanities or social sciences and those who study engineering or
the technical sciences, the hard sciences. And really, I think for me,
this notion is sort of a false duality. Because, as any of us know, looking at
some of the anthropological research or user experience type research that goes into launching
any engineering product, it can be very fuzzy in some ways and
very design-centric. And then similarly on the fuzzy side,
if you’re taking a computer science, excuse me, a political science class. Or you’re taking something in international
relations, these can often involve sort of statistical software that can
involve thinking about game theory. Or thinking about some of
the hard elements. And so in many ways it’s sort
of a false dichotomy. Because you can be a highly
creative systems engineer, you can be a highly technical
political scientist. And so I think that’s sort of
one notion that I’d like to unpack.
>>Has anybody used the term fuchey when you were here?
>>Yeah, I did hear that. I mentioned to Louis, and
he said that’s kosher for the panel.
>>Sorry.>>[LAUGH]>>We hear it now.>>He is now. I was just waiting for
somebody else to say it.>>All right. Any other thoughts on that? You sure? [LAUGH]
>>I’m pretty much just, with that we will segue
to our next person. So one of the impetuses for the panel
is this book that Scott wrote, The Fuzzy and the Techie: Why the Liberal Arts
Will Rule the Digital World. And so in that book, Scott,
you talk a little bit about the article or the address actually of the two
cultures that C.P. Snow gave in 1959. About the evolution of science and
humanities, and the divide between them. And you sort of see the fuzzy
techie gap as an evolution of that. Can you talk a little bit
more about that history?>>Yeah, so for anyone who’s not familiar
with the lecture that Mehran refers to, C.P. Snow, that’s Charles Percy Snow. He was both a physicist and a novelist. He delivered a lecture at
Cambridge University in 1959, where he lamented what he saw to be this
growing chasm between the two sides. And sort of going all the way back to the
Greeks and antiquity sort of the idea of liberal arts or artes liberales was they
kind of tug on the mind in different ways. To free the mind by exposing you to
a plurality of different viewpoints, different insights. And through the bureaucracies and
ways of running large institutions, things kinda got divvied up into being
sciences on one side of campus, humanities on the other side of campus, people sort of self-identifying
in one camp or the other. A little bit of lighthearted jousting or
competition between the two sides. And really, going back to 1959,
he said great thinking and innovation really happens
at the intersection points. We need people that read Shakespeare and
know the laws of thermodynamics. Or people that study biology,
and read James Joyce. And so for me, the impetus for writing the book was really looking at
the world 60 years in the future, today. And saying,
well, we have all these conversations about the importance of science,
technology, engineering, and math, the importance of learning to code,
the importance of big data, and sort of all these topics,
artificial intelligence. Yet on the flip side, we sort of
talk about the counterpoint being
the ethics, or the counterpoint being the context, vis-à-vis the code. And really,
it’s not about one versus the other. It’s about how do we bring
these two things together. And I think that’s where the purpose of
this panel and what a lot of our work, I think, these days is focused on. It’s thinking about not just
artificial intelligence and ethics in two different buckets,
like fuzzies and techies. But how do we really
merge these two sides.>>Mehran?>>And along those lines. So I wanna point out another reading piece
that I would also highly recommend to you. So this is an article by Tracy, A Reformed Techie (Me!) Considers
the Value of a Fuzzy Education. And I would recommend you all read this,
it’s not a long read according to, what is it, Medium? It’s a six minute read.
>>[LAUGH]>>But it is six minutes very well spent I have to say, so I’m gonna ask questions,
ask Tracy a question about that. But first, I would just want to give
you a little context from the article. So in the Medium post, you wrote,
I studied engineering at Stanford University, and at the time,
I thought that was all I needed to study. I dutifully completed my general education
requirements in ethical reasoning and global community. But I was dismissive of the idea that
there was any real value to be gleaned from the coursework. Ruefully, and with some embarrassment
at my younger self’s condescending attitude toward the humanities,
I now wish I had strived for a proper liberal arts education. It worries me that so many of the builders
of technologies today are people like me. People who haven’t spent anywhere
near enough time thinking about these larger questions of what
it is that we are building? And what are the implications for
the world? So I think that’s beautifully said. And I would ask you,
when did you become a reformed techie? And what does that mean to you?
>>[LAUGH]>>There are numerous points of realization I think
that brought me to this. One was actually just
working in a startup. At the time I was working at Quora
which is a question and answer site. And we were making decisions as engineers
about what we were building and what we thought would be good for users. And making a lot of value judgements. And also trying to hypothesize
what people’s character was like if we should assume that people
are naturally good, or bad. And if we let people do whatever
they wanted on the site, would that lead to a good outcome or
bad outcome. And so questions in the product, like
should we allow people to edit anything and then be able to revert
those changes if they’re bad? Or should we make people ask for
permission? So they would attempt to change things and
those changes have to be approved. So these different philosophies
around what is human nature like? And how do communities evolve? And
how can we guide communities to go in the direction that we want,
and what are we editorializing? And realizing how much we as
engineers who write code were imposing our view of the world on
our product and our users made me realize how little I was
equipped to actually do that well. And there were a number of other instances
in product building where this sort of thing came up. One that’s been more relevant recently
is around this idea of free speech and allowing people to say things
that can be very hurtful or mean. And on Twitter, they’ve gone in the direction of
allowing people to speak pretty freely. And in the name of free speech,
have actually cut out a lot of speech from people who aren’t
safe, don’t feel safe, on these platforms. One of the first features I built
on Quora was the block button, because I was getting harassed and called
names, and I just felt it very personally. But I hadn’t really thought
about the implications of how, if we design communities in
particular ways, certain kinds of behaviors result. So it’s in that process of building a lot
of products that I started to think about what I was missing in my ability to build. Later on I started doing more work around
diversity and inclusion and this is another example of something that I really
had no idea about when I was in school. Even though I knew that there weren’t very
many women in engineering in my classes, I hadn’t thought about the structural
reasons why that might be. And where there might be bias,
discrimination, and these systems of marginalization. All these things that I now
am much more familiar with, I had no idea that these
existed back then. And I really bought into the idea
of meritocracy, for example. And this was something that was
very easy to buy into at Stanford, which is a very prestigious university and
the credential’s a great one. Everybody tells you, a Stanford degree is what you need to
go into Silicon Valley and make it. And I wanted to believe that
these markers I had and these credentials were the ones
that made me the most worthy. And it wasn’t until later
when I was doing all of this work on diversity and inclusion. Some of it motivated by personal
experience, but some of it just looking at industry and really thinking about
what was better, what was more right, did I see how there’s been so
much research and study of all of these systems of society. And now we are seeing how they
intersected with the technology and products that we are building.
>>So, given that experience, you know you were talking
about your younger self, there are a lot of your
younger selves here now. Is there something that you would want to
tell them that you wish someone had told you at the time?
>>I think it’s good to just be curious about the world and try to learn
about the systems that surround us. I don’t think it’s just
about taking classes. I was forced to take some number
of humanities classes but I didn’t appreciate them. So I don’t think the solution is just
take a bunch more humanities classes. I think there has to be
a deep appreciation and a curiosity about understanding
the world that underlies a true education in the humanities, liberal arts.
>>Nice, so if we think about putting those things together and turning it into
a career, let me turn this over to Marissa. You’ve been a VP at Google, CEO of Yahoo. Hiring, I would imagine, is probably
one of your most important decisions. And you’re also hiring people
who are not just, say, working in the trenches, but also leaders. What are the kinds of skills you look for
when you do that?>>Well, certainly, I think one of the things
is that you look for people who can see
the connections between things. And having seen something once,
even in a very different setting, they can see an analogy in
something else. So you look for people who can see the connections, and
I think people who are well rounded across fuzzy and
techie often exhibit that better. And to the point of building systems,
you really want someone who can also empathize. Right, then you have this idea, you
know, when this gets in front of someone, how are they going to react to it? Where will they look, what will they do? So you’re designing something that
ultimately really fits that need. But there’s also this revealing
experience I had here that goes back to sort of
broad general requirements. One of my favorite classes I took while I
was at Stanford is really random. My mom is an art teacher,
my father is an engineer, so I grew up with a little bit of
a balance of fuzzy and techie.
the broad requirements and one of the, I think [INAUDIBLE] at the time
requirement number six. I took The American Musical. I’ve
always loved Broadway musicals, but it had like no bearing on
anything I did in tech, nothing even on the soft
side of symbolic systems. But it’s funny because I
actually think about that class, American Musical all the time, and one
thing that we did is we actually looked at a musical from each
decade throughout the 1900s. And you could sort of see how it moves from being
the South Pacific to being Rent, and ultimately Hamilton, which obviously wasn’t
something that we studied at the time. But one of the things that I see
an analogy from is something where you can see user and
audience expectations are always rising. It’s really interesting to sort of for example look at like
a musical from the 1950’s, the things they would have to spell out
in song, in sets, in costume changes. She’s worried she’ll never see him again. Today’s theatergoers would just see that
in a sideways glance and a pensive look. But they had whole scenes in South Pacific
to try and express that emotion. Similarly, the only
reason I think of that is a lot of times
to pull together that technical element, that empathetic element. You also have to keep in mind the fact
that the audience expectation, how sophisticated something can be, how much
they’ll still get it is always rising too. So it’s interesting I think that one
of the strongest things that comes from a well balanced education
is the ability to connect. Now I’m somewhat in Scott’s camp
that there is a false divide, but I think if anything, what we
are seeing is a convergence. I actually think that
the two disciplines, or
the two sides of the [INAUDIBLE], which used to be farther apart, are now coming closer together. The classic example of this at Yahoo!
was when I got there, everyone liked to debate, it was like the employees’ and maybe
Silicon Valley’s favorite parlor game: is Yahoo! a tech company or
a media company? Or a media influenced tech company or
a tech influenced media company. [LAUGH] Right,
it was like we had to say, wait, in some ways it was just a sort of
interesting version of navel gazing. What does it matter? We need to be a company that builds something that
people really want and use every day. And let’s focus on that. But one of the things I also point out to
people is it’s very hard to even look at classically technical companies like
Google and Facebook and even Amazon, and say that now,
with all their media arms, they don’t have a really
heavy media component. If you say sort of media
is more like fuzzy, there’s a lot more creative
work happening in it. And then there’s obviously the tech work. And now, a lot of the classic media
companies have a very very strong technical edge in bend. And so I think there’s actually
a convergence happening and people who pay attention to both
sides are well positioned for that.
>>Nice, so I guess one of the things you referred
to was being a Symbolic Systems major, which is kind of a curricular
way to think about that convergence. Tom is here,
who helped create that program, or led the creation of that program. Scott refers to it in his book as one of
Stanford’s special, unique programs, and actually refers to you as one
of the graduates of that. And so in thinking about Symbolic
systems not just as a major but as a way of helping foster this
convergence in sort of a curricular sense. What would you think from
when you did that program, what was it that you think helped you in
launching your career, and then your career beyond first getting out of Stanford?
>>Well, I think one of the things is that I made a decision that I
didn’t want to go that deep. I know that sounds funny, but as an undergrad
I figured I would specialize in graduate school, like go on and do a Master’s or PhD and then go deeper there, and so
I definitely wanted that breadth. I wanted to be able to go broad and
not specialize too much. But I will refer to Professor Wasow,
and I’ve told this story before. I knew, I did computer science
the previous year to fill the requirement. And I obviously felt like I knew a lot
about philosophy and psychology, but I didn’t know much about linguistics. And so I discovered Symbolic Systems
when I was on the plane back from my sophomore year of school. And I saw it in the catalogue and
I thought it sounded really interesting, I liked a lot of it. But didn’t know know much
about linguistics at all but the head of the department is teaching
introduction to syntax that fall. So I went, I met with Professor Wasou and
I knew he would try and sell me on the program. He’s the head of the program,
he’s obviously gonna try and sell me on the program on his class. And I said, I read about
Symbolic systems over the summer, I’m thinking about becoming
a Symbolic systems major. Do you have any, I’m gonna take your
Introduction to Syntax this fall, do you have any advice for me? And he was like, you should absolutely
be a Symbolic Systems major [LAUGH] all the most interesting
Stanford students are. [LAUGH] Which is kind of like this
testimony, almost like if you weren’t in Symbolic Systems, you weren’t interesting, or you weren’t as interesting
as you could be. But I will say, I mean, there is a lot
of amazing things about the program, I do think. Another class I always think about is
Intro to Syntax, where you go through both a descriptive and a prescriptive
version of the English language, and how different they actually are, and how you’re supposed to talk
versus how people actually do. I think about that a lot too, in terms of how do we prescribe technology
versus how it actually gets used. But I think that there’s a lot of
wonderful things about the program, but one of the most wonderful things
about the program was that there was this wonderfully diverse
group of students. And even the people who
were in my honors thesis cohort, one of them
had dancing, musical, digital puppets. Another one was trying to do
stereo vision with $100 cameras. And
I was doing a natural language system for travel planning. We were all doing
radically different things. But it was really interesting to try and
see the connections between them and be in a room with people who were doing such
interesting projects and interesting work. Even if it was really [INAUDIBLE] from year round.
>>And then it seems there’s
a collaborative project there, but then those skills also
translate into careers afterwards. And that’s one of the things you
talk a lot about in the book, Scott. That you’ve shown a lot of examples of
people who have this background that were able to bring the fuzzy
side of things into solving problems that involved technical
expertise as well. So what are some of the examples that
you would want to share of some of the collaborations, and how you
think they were made to work?>>Well, I think in some ways we have this misconception of
thinking of technology companies as tech monoliths. ’Cause even the core insights
that led to the virality of Facebook, its explosive growth, were basically a
psychological insight of photo tagging, drawing people in through that hook. Or Snapchat in many ways,
I think had a sociological insight about photographs as a means of communication,
as a picture being worth a thousand words. And so it’s interesting to look at these
tech companies through the lens of how they’re also sociology companies,
how they’re also psychology companies. And there’s really a place for
all of us in these places to play a role. In the book, I mean, I highlight this
because my role previously has been as a venture capitalist. So I’ve dealt a lot with
technology founders. And the observation that I had
In the VC seat was that so many of the really interesting companies
tended that I was meeting with, they were founded by people from
all these different walks of life. And they said, what is it about
the Symbolic Systems major in particular. There were these amazing people founding
great companies, like Mike Krieger of
Instagram, or Chris Cox, who is
the Chief Product Officer at Facebook. And I saw they really have this ability
to kind of straddle both worlds, like you mentioned, sort of logic, psychology,
and to speak the speak, talk the talk, and hire the team. But then they maybe also took
a theater arts class where they’ve got this incredible theatrical bit to them,
where they can sell a product and communicate a vision. And I found that some of the really
fascinating companies were the for the ones that struttled these two sides. So one in particular is stitch fix, it’s
one that I talk about a bit in the book. So Katrina Lake, she was Stanford class
of 05 and she was political science, economics manager. And she went to consulting after school,
she worked in retail consulting, really got to understand
a fundamental problem. And she said, I’m doing all this
consulting work, I have no free time. I go home and I have Netflix, which seems
to understand what my preferences are and it can serve me movies really effectively. What if I had Netflix for my closet? I have no time to go shopping, but what
if I could build Netflix for my closet? And so, it was really this insight
of understanding the problem and then understanding that it was gonna
have a heavy logistics element. And a heavy machine learning element. She was able to go find those perfect
people and then convince them, sell them her vision. And obviously fast forwarding five years
she’s been able to create a billion dollar company that’s public with 4000
employees that really sort of blends, I think, this approach to taking machine
learning, a very technical subject, applying it to retail. Which is something that she learned. But then hiring the whole team of 4000
fuzzies that are basically human to human interface designers that are communicating
the product on the front lines. And data scientists kind of
behind the scenes that are making the pipes to kind of create the queues
of items that go to those 4000 stylists lists that then
get passed on to clients. So I think Stitch Fix is an interesting
example to me of a fuzzy, techie company. But again I think when we look kind
of beneath the hood of a lot of tech companies they’re fundamentally also, psychology companies also,
sociology companies. And it’s sort of the vector through which
we look at those companies that makes it interesting.
>>Interesting, cuz one of the things that you alluded to, that Marissa had also mentioned before,
was the notion of empathy, right. And this notion of being able to empathize
with the user, to understand what they want and see what the problems are. There’s also the notion of empathy of thinking about what kind of culture
are you fostering in your company. And since you mentioned VCs, I wanted
to kind of take things in a little
bit of a turn. Because recently there have been some VCs
that they’ve created. And, I think in that regard, Tracy,
you’ve been involved in project include to actually promote diversity
in the workplace, and I was wondering if you could
talk a little bit about that. And, what were the motivations for
doing it and what the goals are?
>>So Project Include is the nonprofit I co-founded a couple of years ago
with seven other women in tech, around the idea of increasing diversity
and inclusion in tech startups. And what we are seeing is, in the last
few years there’s been an increase in the discussion of the lack of diversity,
and all the implications that has for
the tech industry and for the world. If you look at the products
that are getting built, it’s easiest to solve the problems
that you yourself have, and the people that are working in tech
come from a very singular demographic, a lot of times relatively young
white and Asian men in urban areas
with disposable income. So the products that get built
are the products that serve those people. And in addition, on the VC side, a lot
of interesting ideas are not getting funded. We’re seeing increasing discussion about
the lack of diversity and inclusion. Some of it is very obvious
if you look at the room and if you look at the numbers,
that wasn’t there. The more people were starting to get
interested in solving this problem and didn’t know what to do. We felt like it would be
useful to put together a basic handbook of these resources so
startups would have somewhere to go. We also thought that tech
startups were a really good place to be focused, because
there’s more leverage there. It’s much harder to shift
a company of tens of thousands or hundreds of thousands of employees Towards
a more inclusive culture than it is to get someone on the right
path from early on. So project include launch is just a set of
recommendations, everything from defining culture and implementing culture,
to manager training, resolving conflicts. And there’s a section on there for
BCs that want to be more diverse and inclusive in their own partnerships,
as well as in their investments. We wanted to put out these resources to
try to guide the industry towards these solutions.
>>That makes sense, and it seems to follow with a large body of research that says if you have more diverse teams, you get better solutions, or a solution space that serves more people?
>>Yeah, specifically in the space of innovation, where you want to come up with creative solutions to things, when you have more diverse teams, you will have better outcomes. And this has been validated through decades of research. They’ve been able to look at correlational data from public companies, but also, in research settings, forcing teams to be diverse has caused them to have more creative outcomes. And I think intuitively that makes sense: when you are confronted with a room full of people that are different from yourself, you no longer assume that you are all operating under the same assumptions. You work a little bit harder to justify your opinions. And this is true,
not just of demographic diversity, but they’ve seen it with rooms full
of Republicans and Democrats. And just knowing that people around you
are different forces people to be more creative.
>>So in thinking about bringing those folks together, all of you have had lots of experience in various parts of industry. For some students who are here, they see this straight line from majoring in computer science or an engineering field into a specific job in high tech. And sometimes the culture on campus may not make it as clear, for someone studying the humanities or more of a fuzzy field, what those kinds of opportunities are. Clearly you’ve worked with people with diverse backgrounds. So, generally, what are the examples you’ve seen of folks who came from these fuzzy backgrounds? You’ve already mentioned a few of them, but if there are more you wanna highlight, people who you thought made important contributions at the places you were at.
>>There’s one good example. I think we were talking earlier about how we tend to lionize the end product, and we say, gosh, they had this crazy vision to see all the way to the end state at the beginning. We forget that everyone sort of launches and iterates their way to success. And so Slack, for example: Stewart Butterfield has not just one degree in philosophy, but two degrees in philosophy. And he’s been public talking about the value he thinks that’s had. He’s obviously a serial tech entrepreneur: founded Flickr, went to Yahoo, right? And then Slack, which was originally Tiny Speck, a gaming company that had a sidecar product for communications within the engineering team. They said, well, this gaming company’s not really going anywhere, why don’t we invest more time and resources in this communications product? And that segued over to Slack. And so I think he credits, to some degree, obviously being technical enough to be dangerous, not intimidated by technology, getting in and rolling up his sleeves. But also this ability through philosophy to deal with ambiguity and gray-area questions, and to follow an argument, interrogate an argument, all the way down. Which in many ways, I think, is the job of a good product manager, a good leader: to tease out what may be the right answer when you’re not confident with 100% certainty. You’ve gotta wade into that water with some level of ambiguity, be comfortable in that zone, and then be able to communicate those ideas clearly and effectively. And so those are two qualities from a philosophy background that Stewart Butterfield credits as being part of his success: one, being able to write clearly and effectively; and two, being able to follow an argument all the way down to its logical conclusion, even when it’s still not perfect, and being comfortable with that.
>>Well, I think for most majors, there’s kind of a natural place you might end up, and I’ve seen strong examples across all kinds of companies. A lot of people who are strong in writing end up in marketing or PR because they can communicate really clearly. And economics majors ultimately do really well in the finance departments. There are tons of opportunities, cuz at these companies, when you actually look at them at the end of the day, a lot of times it’s 10% or 20% of the workforce that’s technical, and it’s actually the vast majority that are really making all the different elements of the company work overall. And so I think that that’s a really important element of it. Visual design is another. And actually, one of the best usability analysts I ever worked with, I worked with her both at Google and at Yahoo, was a PhD in psychology. She was amazing at going in and doing user studies, analyzing log data, understanding what was happening with the user, what was a biased experiment, what was an unbiased experiment, right? Why things might be happening the way they were happening on the site. And she did an amazing job, but she was very firm. I remember in the first interview, she was like, I don’t design, I don’t write code, I’ll never do a mockup for you. But I can tell you what’s going on with your users, and I can tell you, in a perfect clinical psychological way, how to test for and understand it, [LAUGH] so.
>>One of the things that I read that Eric Schmidt commented on, for success at Google, was not related to major, but to two characteristics: one being persistence, the other being curiosity. And I really like that, because I think it cuts across all disciplines. Recently, I was talking at a large investment bank in Boston. And they said, as we look around the table of leadership at our company, we essentially hire whoever comes in our door, and they seem to be finance majors, accounting majors, people that studied, to Marissa’s point, sort of the A-to-A relationship. They have a degree title, they see a job title, and they say, that matches what I have, I’m gonna apply there. And yet the senior leadership at this bank was predominantly people who had studied English and philosophy and history. And they said, what’s going wrong, or what’s happening in our organization, where none of these people are rising to be around this table? Do we need to hire more of us, or do we need to have more mentorship programs to keep people engaged? And they sort of arrived at the same conclusions as Eric Schmidt: that there was something that led to a 20- or 30-year career at this bank, and it was fundamentally based on curiosity and this continual reinvestment in the learning process. To have the curiosity to ask, what is it about retirement savings, or the market dynamics today? They were continually engaged in the idea of markets, not just the day-to-day mechanics of the business. And so I think those are all skills that are readily applicable across a lot of the different industries that we think about.
>>And there are some common misconceptions. I think Eric Roberts always talked about a study he described showing that programming ability was actually more highly correlated with verbal SAT scores than math SAT scores. Basically, once you got into the 700s in math, most of those people had a good enough base of skills to be good programmers. And what actually divided the 95th versus the 99th percentile programmers was their verbal SAT. It was much more highly correlated because they were able to break down problems more and communicate more. I mean, they wrote code in teams, and other people could understand how they were approaching it and how they had laid out the algorithm better.
>>Yeah, that’s definitely one of those things that gets overlooked in the academic setting. But when you go into the industrial setting, there’s the importance of communication and working in teams, of people being able to get along. So to get into a little bit of the nuts and bolts of the industrial setting, I’ll quote a figure that’s a little bit shocking, and let’s delve into that. According to the Stanford Computer Forum, the average starting salary for Stanford CS undergrads in 2016 was $110,000. So there are lots of techy careers that are seen as lucrative, and sometimes that’s seen as creating an inequality relative to fuzzy careers. So what do you think, first, from the nuts and bolts side, about the earning potential of fuzzy majors? But secondly, and probably more importantly, how do we address that inequality, if there is an inequality, in the long term? Not a softball question, by the way.
>>[LAUGH]>>One framework which I think is interesting: we all sort of think about our education as a consumption product, it’s what do you love to study, what do you love to do. It’s an investment, obviously, it’s a lot of money. And then, to some degree, we think about it as an insurance product: am I gonna be relevant in 10 years or 20 years? We read all these articles about automation and AI, and is there even gonna be a role for me? And one of the most interesting, I think counterintuitive, points that I like to make is that as machine learning and automation take over some of the rote aspects, the scripted aspects, the things that we’ve done over and over again, what actually rises to the surface are a lot of these human skills. A lot of these soft skills of empathy, as Marissa mentioned, of communication and collaboration. Dealing with things that we haven’t seen before, so being able to improvise and be flexible. But to the nuts and bolts, I mean, the $110,000 was certainly more than I made when I came out as a political science major. But then, as you follow through your career, you realize the value of the skills that you’ve built. There’s a great quote by the behavioral psychologist B.F. Skinner, from the 50s, where he says education is what you have left over when what you’ve learned fades away. And I think it’s a really interesting idea, because we’ve all taken a million classes, yet you can count on two hands probably what you remember about a specific class a decade or two decades ago. But it’s that core curiosity, the persistence, the grit, some of those aspects. Those are the elements of education that stay with you.>>This is very anecdotal and it’s humorous, but it’s probably one of the best career moves I’ve ever seen. One of my early colleagues at Google, he actually joined before me, he was one of the people who recruited me there, was a guy named Salar Kamangar. And basically, he graduated from
Stanford with a degree in biology. And he had been using Google and loved it. And so he biked over to Palo Alto,
rang the buzzer at Google, managed to get himself upstairs,
and talk to Larry and Sardane. He was like, I just love this website. I use it all the time. I just think it’s totally amazing. I just wanna work here. And Larry and Sardane were like, that’s
awesome, we would love to have you work here, but you can’t code, so
we can’t pay you, [LAUGH] right? And so Salar was like, that’s okay. I’ll work for free,
I’ll just show you how valuable I am. So literally, he just started showing up
to work, unpaid for the whole summer. And in the end,
he became Google employee number nine. Now he heads up some of Google’s
really important venture efforts. He headed up YouTube for a lot of years. He’s compensated every bit as well as
the early engineers were in the end. But he knew that he had
the persistence, he had the curiosity, and he had skills that he knew would
ultimately prove really useful. And he saw an idea that he
was really passionate about. He knew he was passionate about
the Google technology and was like, I’m gonna actually bet on myself that
I can make a big impact here and a contribution that will be recognized,
even if they’re not recognizing me today.>>That’s actually funny, there’s a person at Facebook, I think she’s still there, Naomi Gleit. She was in my class at Stanford, 2005, and she did the same thing. She biked down to Palo Alto, she showed up at Facebook, and she said, I really like this blue and white website that you’ve got going on here. I wrote my science, technology, and society senior thesis on social networks. I’d love to work for you, I can’t code. But as of today, I believe she’s the longest-tenured employee after Mark Zuckerberg, cuz everyone else in between has left to go found Quora and other platforms.
>>I do think it is undeniable that there is a huge gap in compensation. And if you study computer science or some
technical field, it’s a lot easier to get a job because there is a straight
line between the degree and roles that need to be filled. So the US Bureau of Labor Statistics
projects a shortfall of a million computing jobs that will be unfilled by 2020, cuz there just aren’t enough people studying technical fields. So when you look at the supply and
demand mismatch, if you study something technical and can do technical
work, you’ll get compensated really well. And these differentials are very real. When I joined Pinterest, for example,
there was another woman who joined, same time as me, same year of graduating from
university, but she was non-technical. She got one-tenth the equity that I got in
Pinterest because I was an engineer and we needed engineers to build the platform. And the core value prop was technical,
and she was working in HR. And she was making the office run, but there are many people
who could do that job. So when you look at supply and demand,
that’s just what the market netted out at. And that’s still the case when you look at
the difference between the technical and non-technical roles. So, I think it is really great
to celebrate the people who made their way in and found ways to
be really helpful and useful and became leaders in these tech companies
without technical backgrounds. But I think the reality is still,
there is this big supply demand mismatch. And some of it is that technology
companies and technology industry devalues the humanities and
devalues things that aren’t technical. But there’s also the practical realities
of building these companies and what roles you need there.
>>And I do think it’s also worth pointing out that while there is this income disparity at that first offer, and you can’t deny that, sometimes those very well-paying jobs are not that interesting. So regardless, you should go for a job that you feel is interesting and inspiring for you. I remember one of the more interesting cases. I ran this program at Google called the Associate Product Manager Program, a two-year program where we hired really high-potential people who were good at understanding how technology would be applied. And there was a student at MIT, of course, right, boo, hiss, [LAUGH] right? But a student at MIT who I really wanted to join the program. I paid all the associate product managers the same; I think at that time they all got $80,000 and the same amount of stock. And we said, look, based on performance, we’ll differentiate you later with refresh grants and bonuses, but upfront everyone gets the same offer. And he was like, well, I have this job offer from Goldman Sachs that’s $130,000. And this was probably in the early 2000s, so it was a lot of money. And he was like, I’m graduating from MIT, I just need to be making six figures, I can’t take a job that’s not six figures. And I was like, well, my problem is, I’m hiring a class of people that are all coming in together. I don’t know who’s gonna be better than who, and I’m just not comfortable making an offer where we differentiate. So if you wanna bet on yourself, come in for this offer, and you might be disproportionately rewarded. But I was also like, no offense to Goldman Sachs, cuz I know there are other jobs there, but tell me about this particular job at Goldman Sachs. And he told me about it, and I was like, what are you gonna be programming in? His answer was COBOL. And I was like, who else is on the team? And he was the first new grad that they had hired on that team in 25 or 30 years, right? In the end, I actually made peace with the fact that, obviously, he did choose the Goldman Sachs job. But I was like, look, if writing COBOL and trying to fix bugs in 30-year-old programs floats your boat, go for it. I guess you can go and make a lot of money doing that, but it’s probably gonna be less inspiring and
interesting in the end.>>I think another point, just to jump in there too: I know you studied literature, you studied Russian literature. You don’t need to have read and memorized every book of Tolstoy; you can just sort of know where the big pieces are. And I think, similarly, to be technical enough to participate in this world, you need to know where the pieces are. The languages are moving so quickly, and there are so many new libraries and new frameworks, TensorFlow, and something else coming next week. I think there was an article in TechCrunch a while back about becoming a full stack integrator, and I love that term. It’s about knowing the building blocks, and the building blocks are coming in larger and larger chunks. So I’d say, learn enough to be dangerous, to the extent that you’re passionate, but then follow your passion, because there are all these different cuts through the material. And breaking the ceiling, breaking the ice on being technical, is certainly helpful in our kind of technological world. So the book that I wrote is very
much pro-technology, from my perch in Silicon Valley as well. But I think it’s about unlocking this other side of when we think about machine learning, when we think about big data. We tend to focus on this notion that we have sensors that are becoming cheaper, that we’ve got them baked into our fabric, into our cars, into our walls, with more and more data. Going all the way back to Plato, people have said that with more data there’s more information, and there’s gonna be more knowledge and more wisdom. And maybe that’s true on some small level, but really it’s about asking the questions. And so getting at these questions from different angles, from different perspectives, from different backgrounds, becomes more and more important as these chunks get bigger and bigger. Not everyone’s gonna build the machine learning algorithm; you’re gonna pull from a library, or you’re gonna tweak it. Maybe two engineers will tweak it, but then a bunch of people will ask questions of it, make sure it’s free from bias. Those are different perspectives on learning enough to be dangerous; you don’t need to know everything about everything.
>>Well, with respect to learning enough to be dangerous, and a lot of the things, Tracy, that you said: you said that the tech industry devalues the humanities. I just wanted to follow up on that point, cuz I’d add the clause, given recent events, comma, perhaps to its detriment. One of the articles that we were talking about before this panel is an article in the Boston Globe by Yonatan Zunger, who was a colleague at Google for a few of us, talking about the Facebook Cambridge Analytica debacle that happened, and the importance of ethical consideration in technology. And so the question I wanted to turn around, first to Tracy and then to the rest of the panel, is: how do you see the interplay between a humanities education and then going out to the workforce and making sure that what happens, even in tech companies, is following the guidelines that we would think of as ethical, or reaching a societal outcome that we would like to have?>>I think some of it just has to be driven by society at large. And
if we acknowledge that we’re operating in a very capitalist society right now, and companies optimize around making money, it is very easy to trade things off incorrectly. Cuz I think some of what we’re seeing now around this Facebook Cambridge Analytica debacle is that consumers are upset, they don’t like what’s happening, and that’s putting more pressure on Facebook. Not just on the consumer side, but also from government, like the hearings that they’re having in front of the Senate, to steer in a better direction. But I think it’s very hard to expect, in a world that is so capitalistic, that companies will necessarily just move in the most ethical directions.
>>Yeah, it’s interesting with regard to the sort of nudges that happen. We talk a lot about the attention economy and focus on advertising as a business model, and optimizing. Technology’s not agnostic on its own; we’re A/B testing to optimize for certain outcomes. And those outcomes can be maximizing ad revenue, but they can also be, on the flip side, these different types of nudges, or technological paternalism: you’re basically outsourcing your values for somebody else to decide. But those decisions, and I think this is the really difficult thing, some of them may be good. For example, Omada Health is a company in San Francisco that uses a lot of behavioral interventions to nudge people to be more active in their lifestyles, to get them off the couch. Because if they lose weight, lower their body mass index, they have a lower chance of getting diabetes. And so then the question is, well, if that’s the outcome we’re A/B testing and heavily optimizing for, versus an ad click, is that then fine, but the ad click is not fine? So who’s making these value determinations? These are just big philosophical questions. There’s no right or wrong answer necessarily, but I think these are the types of issues that society seems to be grappling with now, and I think there’s the realization that these are not just purely technology issues. They’re also a whole can of worms.
>>I personally am just not sure there’s as much of a connection there. I wish I could say that if there’d been a huge infusion of more fuzzies and ethics into Facebook, this would’ve gotten solved. But I actually think that when you look at it, these products are really deep; they’re very sophisticated and complicated. There are very, very smart people working at Facebook, and they sat there, looked at that API, brainstormed how it could be used, the applications that could be built, and it seemed like people would probably build pretty interesting things with it. Two years later, they decided, okay, that API didn’t actually really come together the way that we thought, and then they had someone abuse their terms of service. And I’m not sure there’s a lot that could happen to help that, other than people getting a lot more versed in, sort of, the discipline of it. It’s interesting, I talked with Reid Hoffman, who is also, coincidentally, a symbolic systems major. And for whatever reason, LinkedIn didn’t follow suit and roll out a similar API. I wish I could say it’s because of his broad background in education, but I don’t think it was. I think it was just that, once you’ve developed and worked in social networking for a long time, and thought about what that data is, the value it has, how it can be used, how it can be misused, you get a better feel for it. But right now, we’re kind of at the first frontier, the first inning so to speak, and people are just sometimes not that aware of the ways it could be abused or misused later on. And the people at Facebook and the people at LinkedIn are as skilled at this as anyone can be. Even he would say that when that API came out, it was like, interesting, I don’t think we should do it, but he couldn’t even really articulate why. Right? And so when you’ve got people skilled in the art and they can’t really agree on whether it’s a good idea or a bad idea, or even foresee the abuses, it’s really just that we’re too new in that discipline, or too new in that field, to really see the problems.
>>So I think there are those cases where it’s hard to anticipate how a product might get used or what impacts it might have. But I think there is also the question of, once you do see something heading in the wrong direction, are you able to pull that back and address it? And there are examples of companies that have behaved very unethically, and of people who should have known that what they were building or working on was unethical, and who didn’t stop it. I think there are many examples coming out of Uber, these sorts of tools that they were building to track people or evade law enforcement. It seems that the people who were working on those should have said no. I think Zenefits also built some software to evade, or to bypass, certification processes in different states. And it seems like there’s a void of morality in some of these tech workplaces as they’re building products. So there are the cases where it’s just impossible to know, like that Facebook’s engagement algorithms would cause the sort of clickbait, these holes that people will go down, and then possibly tip the election. But there are also cases that are pretty clear, and if they had seen things trending in that direction, why didn’t people pull back then?
>>Yeah, and these technologies are obviously going off the screen and into the car, off the screen and into many different folds of our lives. With Amazon Alexa in your house, if something happens, can those records be subpoenaed by a court? There are all these big questions about justice. We sort of point the finger at social media today, but we forget that in 2010 it was the rise of the Arab Spring and sort of democracy promotion, and in 2008 it was the platform that gave Barack Obama a chance at the presidency. So we’re very quick to flip the tables and impugn the motives. But again, I think there are very smart people asking good questions, and it’s certainly not what they majored in that determines whether they succeed or not. But the focus on these issues at a macro level is interesting, because it’s bringing these sort of liberal arts ideas out into the public forum, where at least we’re grappling with these really tough questions in public, and that’s a healthy debate. Now, there may not be a right or wrong answer, but the fact that we’re debating these things in public is just like having a huge political science class, a huge philosophy class, on the front pages.
>>One thing I really liked in that article you referenced by Yonatan Zunger is that he talked about how, in some other domains, scientists have had to think about the ethics of what they’re working on. If you look at chemistry or physics, the work that scientists did was turned into chemical weapons and nuclear weapons, and the scientists doing this research had to ask themselves if it was the right thing to be working on. You could talk about inventing things in the name of science, but knowing that what they were working on could be turned into weapons of destruction or death, they had to think about those consequences. We’re just starting to see much more of that in software engineering, in the tech industry. It is possible to build these recommendation systems, but knowing that they could be used in particular ways, it may be instructive for us to now ask the questions of, how will this be used, and how should that inform what I actually work on?
>>Yeah, it’s interesting that you mention that. There’s actually a great documentary, not just good but great, I’ll put in a plug, called The Day After Trinity, by the director Jon Else. It chronicles the Manhattan Project, and actually many of the physicists involved with the Manhattan Project went on to become peace activists after the project was over. Now, because you’re all alumni of Stanford: what’s the one thing that you wanna share with folks in the room, either curricular or extracurricular or whatever, that impacted you in an unexpected, positive way?
>>[LAUGH]>>There were none. No, I’m sure you were all thinking.>>[LAUGH]>>I think one thing for me is that I had one close friend of mine; we were standing in front of a map in his room, and he made a counterintuitive point, which was that life is long, we’re going to get to all these places. And it was the opposite of what you normally hear, people standing in front of a map and saying, life is short, where do you wanna go? And I love that perspective. And I think taking a long view on your friendships, on the people that you meet, on your education, on your professors, and really getting to know a whole set of people. Because for me, I’ve got these sort of unpredictable friendships with people from many different parts of life. One particular class, as Tristan here has mentioned: we actually met in an Egyptian history class, of all places, and he’s still a close friend. So had I not taken that class, I would never have met Tristan. So there are interesting cuts through the university, I’d say. Take the long view.
>>I had one very interesting experience in computer science. So I started off thinking I was maybe going to major in computer science, took a couple of courses, and then decided not to major in it because I was intimidated by my classmates and felt like I was never going to catch up. I took CS106B and 107, and after that I was just, I’m done with CS. And then, the quarter after that, the lecturer of 107 reached out to me and asked me to TA for him. And my immediate response to him was, you must be confusing me with somebody else, because I didn’t do very well in your class, and I have a generic Asian-sounding name, so maybe you mixed me up with another Asian student.>>[LAUGH]>>And we went back and forth, probably to be safe, me saying, I don’t think you actually want to invite me to TA because I don’t know this material, and him saying, I know who you are, you’ve been to my office hours, I want you to TA. And he ultimately took responsibility, like, if you’re a bad TA, it’s on me, because I am the person teaching this class, so you should just do it. And I did TA for 107 two quarters after I took it, and the change in perspective was complete. As a student, I thought that I was doing very poorly in the class. I heard all my classmates, mostly male, talking about how easy it was, that this was supposed to be a weeder class but it was really not that hard, and me feeling like I was just incompetent. And when they talked about how long it took them to do the assignments, they’d say, it took me three hours to do that Kevin Bacon assignment, and I was thinking, it took me 15, so I was quantifiably five times worse than them.>>[LAUGH]>>I realized when I was a TA and I had to grade their assignments that people’s calibration of their own performance to their actual performance was not very good, and there were definitely a lot of people who thought they were amazing that were just fine. And having that different perspective, knowing that I had done all these assignments myself two quarters prior, and seeing the code quality of the students I was grading, being able to actually see where I would have put myself if I had to rank everyone, was just a really useful perspective. Switching from being a student to a teaching assistant taught me a lot about how the world functions. And also, there are some gender differences at play here. But that perspective shift was really valuable, and it’s also been really useful in the workplace, in understanding how differently people can perceive the same situation, and how the way they self-describe can be very different from the reality on the ground.
>>Well, I didn't know that Tracy was also a Jerry Cain TA, as was I. So maybe the tip, the pro tip, is being Jerry Cain's TA. [LAUGH] But tying it back to the panel, what I would say is: don't bow to people's expectations. I think that when you look at fuzzy and
techie, you might think: if I am a fuzzy, or if I am a techie, people expect this of me. And some of the best decisions I made when I was at Stanford were precisely the ones that weren't expected. I knew I was a lot stronger in science and math, yet coming in as a freshman I took Structured Liberal Education, which I figured I wouldn't be that good at, but I knew that it would really push me. I also was certain that I was
going to become a doctor, and it wasn't until conversations with Professor Wass and others that I decided to do something else. So I always keep questioning: are your expectations of yourself, and other people's expectations of you, really that relevant or that right on? My first CS class was actually CS 105A; I don't even know if it still exists. It was Computer Science for Non-Majors; that was actually the title of it. I took it freshman year, spring, just to fulfill a requirement. And Professor Plossing opened the class
the first day and said, to be clear, we've taught this class for ten years, and we've done studies on everyone. It was a 400-person class, and we know statistically that exactly two of you will go on to take any other computer science class, ever. So we're gonna keep this easy, we're gonna keep this simple. [LAUGH] Because I know that you're just trying to fulfill your requirement and get on with things. And that was literally my first CS class. And if I had just done what Professor Plossing
set as the expectation, my whole life would not have unfolded in the way it ultimately has. So I think that notion of questioning, even the expectations you're hearing from authority figures, and that sort of divide, is a good thing to keep coming back to. Are those expectations of what a fuzzy is, what a techie is, what I should be doing right now, really right on?
>>Wonderful. Well, I'll take a moment to
thank our panelists. Thanks very much.
[APPLAUSE]
>>And thanks also to Louis Newman and Niles Wilson and the UAR program for putting this event together. Thanks very much.
>>[APPLAUSE]
For more, please visit us at stanford.edu