Marissa Mayer at Google Press Day Paris


MARISSA MAYER: I’m excited to
be here today to talk to all of you about search, past,
present, and future. Even if it’s a bit of a
rainforest scenario here. And when we think about search,
there’s a few key components that make up
the search experience. I want to spend a little bit
of time talking about the framework, and the lens by which
Google considers search. And there’s basically
four key components. The first component is
comprehensiveness, how much information are you
searching over? Because it turns out the quality
of your answer ends up getting better as the set of
information gets bigger. Makes sense, right? You can’t find an answer
unless you have access to that answer. So by including more
information, the number of questions and the quality
of the answers you can provide goes up. And that’s comprehensiveness. So we’re constantly pushing to
make Google more and more inclusive of content
for the sake of improving our search quality. The next component
is relevance. Apples, oranges, and grapes,
which go first? Oranges, apples, or grapes,
that’s pretty much what we’re doing with regard to websites. Which website should go first,
which should go second, what’s the best answer we can provide,
what’s the second best answer we provide. So relevance really comes down
to a question of ranking and ordering the results. How can we order the
results in the most precise way possible? Speed. Speed has actually been one of
the hidden levers that has really driven Google’s growth. By making search simple, fast,
and convenient, it ultimately means that our users are willing
to do more searches. So we’re constantly improving
and working on making Google faster and faster so the users
that we do have end up using the search engine more for
searches they might not have otherwise taken the
time to do. And finally, there’s the
user experience. How are the search
results laid out? Do they make sense? Are they easy to understand
and interact with? Those are all things that
go into the user experience part of search. Is it easy or hard to understand
what the answers are and what the layout
of the page is. And it’s these four components
that work together, comprehensiveness, relevance,
speed, and user experience, that create the entire
search experience that you have on Google. And with that framework in mind
we can take a look at where Google’s been in the past,
where we are today, and where we will be
in the future. So, the early days at Google. In the early days at Google,
the web looked like this. And it was small enough that
it all fit on one slide. Not really, but the idea
is it is very small. And, interestingly, this week
marks my eighth anniversary at Google, and I remember the week
that I started the search engine had thirty million
pages in it. If you compare that to today
with the tens of billions of pages that now comprise the
Google search engine, we’ve actually grown more than a
factor of a thousand in eight short years in terms
of information. So the information on the web
has grown and grown, and you’ll see this figure sort
of slowly span out. And as that information has
grown, it actually has increased the demand
for search. In the early days of the
Internet, there were so few websites, you could actually
organize them in just a list. People could just look at a
list of websites and click through them, or they could
look at a directory of web sites like Yahoo did. You could actually organize
the websites by hand. But as the web exploded in
terms of the amount of information available, search
became an important and necessary tool in order to
actually find information. So we constantly have been
working to crawl and include more and more content in the
Google search index. And this is the logo that
appeared on June 26th, 2000, Giga Google. Computer scientists call
a billion giga. So we were very excited because
in June of 2001, one year after I started, we
actually had grown to one billion webpages. And on this day, it was a big
advance for us because we were bringing the search engine to
five times the size of what it had been for us, and it was
about 50 percent bigger than the next largest
search engine. The next largest was about
600 million pages. Ours was a billion. So the way you can think about
that is we could actually find 50 percent more answers
than other search engines could find. And that’s the benefit of that
kind of comprehensiveness. So this is the launch
of Giga Google. But, of course, increasing
comprehensiveness actually has a flip side to it, which is as
you include more and more information, relevance,
and ranking gets harder and harder. You go from just having, say,
four results, to suddenly having forty, and now the order
that the results appear in is actually more and
more important. So what I’ve done here is
pictured our search result page as it appeared in 1999
or early 2000. And I’ve shown it in a
screen for a reason. We ultimately realized that in
the early days of search engines users got used to sort
of hunting and pecking through results, clicking the next
button, going through the first few pages to try and
find the right results. We realized that our job was
to try and provide the best result first, and that if we
did our job well the right result or the best result might
be in the first one or two websites. People wouldn’t have to press
next, they might not even have to scroll down. And it was that commitment to
delivering the best result first that ultimately led to
things like the I’m Feeling Lucky button, which, of course,
is translated into lots of different languages
here in Europe, because we felt that our goal was to
ultimately produce the best result, in the first position,
on the first webpage. And that commitment to relevance
benefited us in one unique way, which is it was that
relevance, that magic of getting just what you wanted in
the first result, that led to a lot of word of
mouth spreading. Google has never really
advertised our search service. We joked that in early 1998, we
realized we had users, we had searches coming in
to the search engine, we could see that. But we basically realized that
the users were us and our mothers, Larry, and Sergey,
and Craig– –in terms of speed, and
throughout the evolution of Google, if you think about how
you answered questions twenty years ago. You’d go drive to the library,
think of a question, go and find reference books for it,
journals, look things up, and find an answer. And what that ultimately meant
was a lot of questions went unanswered, because you’d just
decide it wasn’t important enough to spend a half
an hour or an hour researching that question. So, over time, things have
evolved, sometimes you’d ask a friend, that’s slightly faster
than going to the library if you can find the right friend
and they have that piece of information. And then, with Internet search,
what we saw happen was people could get
their questions answered in under a second. And that kind of snappy
responsiveness is ultimately what caused Google to grow. We are continually trying to
improve our speed, and that improvement has yielded more
and more searches. At some point we’re going to be
up against the constants of physics, but our goal is to
have a Google search be as fast as a light beam to and
from our data centers from your location. So we’re really excited about
speed and what that can do for the search experience, and also
in our past, how we’ve evolved and pushed on that as
a main lever for growth. So I think what I should do– –10 minutes or so, because I think the questions all of you are asking are in many ways more interesting and more fun than some of these things. But I’ll quickly go over at least some of the announcement pieces, in particular some
of the future work. So this is what I was talking
about in terms of speed, and the way that Internet search
really has changed the way that people interact with
search, because it’s so much faster. Archie was one of the original Internet search services, where you actually sent an email in asking
for information, and it emailed you results
back a day later. And you could actually pass
queries through email. And there is Google search, and as I said we’re basically getting to the point where our
goal is to be bounded by the speed of light, how quickly can
we answer a user’s query, where we’d actually be hitting
up against that physical constraint. And we’re obviously always working to optimize the speed of queries, especially here in
Europe and all around the world, to make sure we’re
answering our users’ queries as quickly as we can. This is the evolution of the
homepage over the past eight years, with basically one
screen shot per year. I think the homepage is an
interesting item to watch evolve because one, one reason
our homepage has been so clean is because it’s just fast. It
loads fast, it’s no nonsense. It tells people that our focus
is on search, and that’s the core of what we do,
that’s the core of what’s on the home page. It’s also is a bit of dumb luck,
in that our homepage was this simple to start with
because Sergey didn’t know very much HTML, so he didn’t
have the patience to create a very complicated page. But over time, it’s become a
statement about our user experience, that we put search
first, we put our users first, and that we focus a
lot on latency. In terms of search today at
Google, how does it look, and how does it work? This is the search
result page. You’ll notice it’s not very
different than what you saw as a search result page from
a long time ago. In fact, as the VP of search and
user experience, one of my jobs is how does the search
engine look, how does it work. And my friends and family give
me a hard time, because they say, well if it’s your job, how
does it look, how does it work, it looks the same,
what do you do? Which is an interesting
question, but I think the idea here is over the years we’ve
interlaced a lot of new functionality, and a lot
of complication. And the fact that the search
result page has remained clean, and simple, and very user
friendly in spite of all of those advances, is a
testimony to our commitment to user experience. So I talked a little bit during
the Q and A session about approximating
intelligence, and trying to understand a user’s intent. What’d they type, and
what did they mean. And over time, we’re getting
closer and closer to this. There are things like our spell correction, where here you can see someone typing something like Brittany Spires, where spires is a real word, and you actually see webpages that have both Brittany and Spires on them. But those aren’t the webpages you want. Those are webpages by people who actually think Britney Spears is spelled that way. They’re strictly inferior pages to the ones you get if you actually spell her name correctly. So we offer things like did you mean as a correction.
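A rough sketch of the general idea behind a did-you-mean suggestion is below; the word frequencies, thresholds, and helper names are made up for illustration, so this shows only the edit-distance-plus-popularity intuition, not Google's actual algorithm.

    # A toy "Did you mean" suggester: rewrite each word of a query to a far
    # more common near-miss from a made-up frequency table. Illustration only.

    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance computed with a rolling DP row."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # delete ca
                               cur[j - 1] + 1,             # insert cb
                               prev[j - 1] + (ca != cb)))  # substitute
            prev = cur
        return prev[-1]

    def suggest(query: str, word_counts: dict[str, int],
                max_dist: int = 3, boost: int = 10) -> str:
        """Replace each word with a much more frequent near-miss, if one exists."""
        corrected = []
        for token in query.lower().split():
            base = word_counts.get(token, 0)
            candidates = [(count, word) for word, count in word_counts.items()
                          if count > boost * max(base, 1)
                          and edit_distance(token, word) <= max_dist]
            corrected.append(max(candidates)[1] if candidates else token)
        return " ".join(corrected)

    # Made-up counts: "brittany" and "spires" are real words, but the correctly
    # spelled name is overwhelmingly more popular in queries and on the web.
    counts = {"britney": 500_000, "spears": 480_000,
              "brittany": 40_000, "spires": 9_000}
    print(suggest("Brittany Spires", counts))   # -> "britney spears"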
We’ve also offered something that we refer to as alternate queries. This is what I referred to in the Q and A as the Survivor example. By looking at users’ query revisions over time and analyzing those, as well as the web content itself, we can basically understand, or seem to understand, that Survivor the television show isn’t on the network ABC in the US, it’s on the television network CBS. So by looking at all that data,
even though there’s not necessarily a human editor in
place there, it understands the semantics, or it appears to
understand the semantics of the survivor television
show, how it works. And we’ve also taken those
refinements and begun to apply them to really broad queries. So if you type something like
television into Google, we also offer searches related to
television on the bottom of the page, that help you refine
what you meant, the invention of television, television facts, its effects, and so on and so forth. Site links is something
interesting as well. Here, in our commitment to try and bring users as close to the information as we possibly can get them, if you do a broad
query like the BBC, we’ll give you the BBC homepage as
the first result. We often find that when people
go to the BBC, they are looking for three or five top
level pages, news, sports, weather, and what have you. And rather having you fumble
for that information after doing a search on the BBC site,
you ultimately can go right there with one click from
our page using a feature like site links. I want to spend a bit of time
talking about some of the advances we’ve made in
language of late. So there was a feature that
was rolled out by our engineers in our Haifa research
center, which does transliteration, and makes it
possible to type in different languages based on how
your keyboard is set. So what you can see here is if
you’re using a Roman English character set, and that’s how
your keyboard is set, you can still enter searches in
Cyrillic for Russian. So it does the transliteration
and figures out what word you must be typing. And similarly it works here if
your keyboard is set to type in Hebrew, it actually
translates it into English. So you can just keep typing
without having to change your keyboard back and forth. It makes it easy to enter in
different searches, regardless of your keyboard settings. There’s also the challenge of
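The transliteration feature she describes can be pictured, very roughly, as a longest-match mapping from Latin keystrokes to another script; the tiny Russian table below is a made-up subset and ignores the harder problem of choosing among ambiguous spellings.

    # A simplified sketch of keyboard transliteration: map Latin letter
    # sequences to Cyrillic so a query typed on an English keyboard layout
    # comes out as a Russian word. The table is a small, invented subset.

    # Multi-letter sequences must be tried before single letters.
    LATIN_TO_CYRILLIC = [
        ("shch", "щ"), ("zh", "ж"), ("kh", "х"), ("ts", "ц"), ("ch", "ч"),
        ("sh", "ш"), ("yu", "ю"), ("ya", "я"), ("yo", "ё"),
        ("a", "а"), ("b", "б"), ("v", "в"), ("g", "г"), ("d", "д"),
        ("e", "е"), ("z", "з"), ("i", "и"), ("k", "к"), ("l", "л"),
        ("m", "м"), ("n", "н"), ("o", "о"), ("p", "п"), ("r", "р"),
        ("s", "с"), ("t", "т"), ("u", "у"), ("f", "ф"), ("y", "ы"),
    ]

    def transliterate(text: str) -> str:
        """Greedy longest-match transliteration of Latin input to Cyrillic."""
        out, i = [], 0
        while i < len(text):
            for latin, cyr in LATIN_TO_CYRILLIC:
                if text.startswith(latin, i):
                    out.append(cyr)
                    i += len(latin)
                    break
            else:
                out.append(text[i])   # keep spaces, digits, unknown characters
                i += 1
        return "".join(out)

    print(transliterate("moskva"))   # -> "москва"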
There’s also the challenge of how we provide the best possible result relative to
where you are, because there are some queries that
are ambiguous. So, Cote d’or, is that the right
way of announcing it. the region here in France? I’m sorry if I’m actually
fumbling on the pronunciation, but Cote d’or in Australia
is a very popular kind of chocolate, so when you do the
query Cote d’or in Australia, we provide one English and
French result, which is the Cote d’or website. It turns out the candy company
was bought by Kraft, and actually in the result set,
we have the Kraft website. And not just any Kraft website,
the Australian version of the Kraft website,
because these results are angled towards Australian
users. But here in Europe, if you’re in
Belgium, and you type Cote d’or, we think well that user
might mean the chocolate, or they might mean the
region in France. So the first result is the
chocolate brand, the rest of the results are written
in French about the region in France. And finally, if you runs this
query from Google.fr, what you’ll see is all the results
are in French, and they’re all about the region, not about the
chocolate, because we know that, based on his past
behaviors, that French users are usually when they type
Cote d’or looking for the region, and they’re looking
for these webpages. So, let’s talk a little bit
about future, and what some of the initial steps are that we’ve
made towards the future, and also how we see that
evolving over time. Continuing on that language
theme, there’s a really exciting prototype that we
rolled out last month which is called cross language
information retrieval. Information retrieval is a fancy
way of saying search. So this is cross-language
search. We’ve been investing very
heavily at Google for some time in translation technology,
automated translation technology. Given a webpage, or given a
piece of text, can we use machines to translate it
into another language. And we have a very good
translation engine. We’ve been entering it into
international competitions, and it’s very good. It’s often found to
be best of breed. So the idea is to take a
person’s search, translate it automatically into other
languages, search all languages for answers to that
query, return the results, and then translate them back into
the native language. So the end user, they typed in
their query in a language they understood, and they got results
in a language they understood. So here’s a diagram of
how this would work. And in our first attempt, we’re actually having the users declare language pairs. So here, the user says, “my language is Arabic, and I want search results in English.” So they can type in Arabic the equivalent of restaurants in New York, we translate that to restaurants in New York, search English webpages for restaurants in New York, and then use our automated translation engine to translate the results back into Arabic. So what the user sees is they typed the search in Arabic, and they got Arabic results.
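The cross-language pipeline she walks through (translate the query, search pages in the target language, translate the results back) can be sketched as below; translate() and search_web() are hypothetical placeholders for the real translation engine and index, so only the flow is meant to be accurate.

    # A sketch of cross-language information retrieval with placeholder
    # helpers standing in for the real MT system and search backend.

    from dataclasses import dataclass

    @dataclass
    class Result:
        title: str
        url: str
        snippet: str

    def translate(text: str, source: str, target: str) -> str:
        """Placeholder for an automatic translation engine."""
        raise NotImplementedError("stand-in for a real MT system")

    def search_web(query: str, language: str) -> list[Result]:
        """Placeholder for querying the index restricted to one language."""
        raise NotImplementedError("stand-in for a real search backend")

    def cross_language_search(query: str, user_lang: str, target_lang: str) -> list[Result]:
        """Translate the query, search pages written in the target language,
        then translate titles and snippets back into the user's language."""
        translated_query = translate(query, user_lang, target_lang)
        hits = search_web(translated_query, language=target_lang)
        return [Result(title=translate(h.title, target_lang, user_lang),
                       url=h.url,  # the landing page itself gets translated on click
                       snippet=translate(h.snippet, target_lang, user_lang))
                for h in hits]

    # e.g. cross_language_search("مطاعم في نيويورك", user_lang="ar", target_lang="en")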
This is a hugely powerful technology, when you think about the idea that a fact or an answer could be written in any language and you could find it. It’s particularly empowering for languages that have only a small amount of content on the web. To date, about one percent of
the web is written in Arabic. So this type of translation
technology can actually unleash all the power
of the information stored in those webpages. Here’s another example, things
like typing tests of words per minute, where again they can
query it in their native language, get results in their
native language, while searching English webpages. This is an example of what our
whiteboards look like when we talk about relevance, and search
algorithms. And, the idea and the spirit
here is building towards universal search. There’s lots of complication
happening behind the scenes. Our goal is to make sure that
the user experience on Google is easy, that it’s simple,
and straightforward. And over time, as more and more
content has come online, we went about building lots of
different search engines. So we had our main web search
engine, and then we got the idea to search images, so we
built image search, and then we realized that we should
build a search engine for news, and one for books,
and one for video. And in the end, we have a lot of
very good search tools that give good answers for our users,
but there’s so many search engines it’s almost
impossible to know which search engine to use. And that isn’t straightforward,
and that isn’t particularly
user friendly. You almost need a search engine
to understand which search engine to use. And so the idea is that we want
to break down all those different barriers, and bring
together one unified product, and that product is
universal search. One search box, you go there,
you use that search box, it’s the default search
on Google.com. And you’ll get embedded in
your search, not only webpages, but images, books,
news, local, and video. And this is just the first
step in universal search. There’s many more corpora for
us to include, blog search, scholar, and so on
and so forth. But, it’s an important first
step to take all of those different types of data and mix
them into the result set. So, in terms of examples, Steve
Jobs is one of the icons I like to follow. I just think he’s fascinating. When I did my first queries
on universal search, I was electrified at what it did,
because it took a very standard set of web results and
presented, on Steve Jobs, images of him, news about him, a video of him giving the commencement address at Stanford University two years ago, and news archive articles
that chronicled his career, all integrated into
one result page. No more do you have to go to
five or six different places to try and find the
best answer. We query all of those different
sources with universal search, and bring the
best answers back to you. And it really provides for a
much more rich experience. So if you look at things like
Nosferatu, which is the original monster movie. We’ve always had
a good result. There is an IMDB result there
that gives you the meta information about the movie, who
directed it, who acted in it, how long is it, what’s
the plot line. But ultimately, you don’t get a
sense of what Nosferatu is. Now, by bringing in videos,
which you can see here is the third result, it turns out
Nosferatu is actually on Google Video, not just a
clip, the whole movie. And through a universal search,
we have the movie right here including a watch
video link, and you don’t actually just have to read about
Nosferatu, which is what you had to settle for before
universal search. You can actually watch and
experience the whole movie, and get a real sense
of what it’s about. There’s also exciting items
like historical data. So, for example the query
I have a dream. In the past it’s been easy to
turn up the text of the Martin Luther King speech, but
ultimately being able to watch the speech and watch a videotape
of that is a much more powerful answer
to that search. Universal search is hard. In order to roll it out we had
to clear a bunch of different challenges and make a bunch
of changes on our site. The first is that in order to actually gather all those results, we have to do the query on images, do the query on books, do the query on news. Why haven’t we done that to date? Because it’s just been
too expensive. So we needed a new
infrastructure, which we rolled out last month
to support this. We also needed new ranking
algorithms because, how do you compare a web result and a news result, or a news result and a video result? How do you rank them relative
to each other and understand which one is more relevant? It’s very hard. So our search quality team has
worked on building new scoring algorithms to help us understand
how to rate these disparate types of content. And finally there’s displaying
results. How do you display them? Should you group all the images
together, should you group all the videos together? We ultimately settled on a rank-ordered list, but we’ll be doing more experimentation with this in the future. So that’s universal search.
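One simplified way to picture the blending problem described above, fanning a query out to several corpora and merging everything into a single rank-ordered list, is sketched below; the backends, scores, and per-corpus weights are invented for illustration and are not Google's scoring.

    # A toy illustration of blending results from specialized indexes
    # (web, news, video, ...) into one rank-ordered list.

    from typing import Callable

    RawResult = tuple[float, str]   # (corpus-specific score, url)

    def blend(query: str,
              backends: dict[str, Callable[[str], list[RawResult]]],
              weights: dict[str, float]) -> list[tuple[float, str, str]]:
        """Query every corpus, normalize scores within each corpus,
        apply a per-corpus weight, and merge into one sorted list."""
        merged = []
        for corpus, search_fn in backends.items():
            results = search_fn(query)
            if not results:
                continue
            top = max(score for score, _ in results) or 1.0
            for score, url in results:
                merged.append((weights.get(corpus, 1.0) * score / top, corpus, url))
        return sorted(merged, reverse=True)

    # Hypothetical usage with stub backends and made-up scores:
    backends = {
        "web":   lambda q: [(12.0, "http://www.apple.com/stevejobs/"),
                            (9.5, "http://example.com/jobs-bio")],
        "news":  lambda q: [(0.80, "http://news.example.com/jobs-keynote")],
        "video": lambda q: [(0.65, "http://video.google.com/stanford-commencement")],
    }
    weights = {"web": 1.0, "news": 0.9, "video": 0.8}
    for score, corpus, url in blend("steve jobs", backends, weights):
        print(f"{score:.2f}  {corpus:6s}  {url}")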
And then the final slide is on personalization, which we had lots of questions about
in Q and A and also afterwards on this. To date we’ve offered up a
really powerful product, which is personalized search. What personalized search does is it takes your search history, your queries and your clicks, and it customizes searches to that information. It’s completely transparent. You’re able to see the search history we’ve collected on you through an easy link at the top of the page. It’s also opt-in. You have to elect to use the service in order to achieve personalization of your search results.
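A deliberately naive picture of how search history could customize results is sketched below, nudging up results from sites the user has clicked before; this heuristic and its numbers are made up for illustration and are not the actual personalization system.

    # A toy personalization pass: boost results from sites the signed-in
    # user has clicked often in their search history. Illustration only.

    from urllib.parse import urlparse

    def personalize(results: list[tuple[float, str]],
                    click_history: dict[str, int],
                    strength: float = 0.05) -> list[tuple[float, str]]:
        """Re-rank (score, url) pairs using per-site click counts."""
        rescored = []
        for score, url in results:
            site = urlparse(url).netloc
            boost = 1.0 + strength * click_history.get(site, 0)
            rescored.append((score * boost, url))
        return sorted(rescored, reverse=True)

    history = {"en.wikipedia.org": 12, "www.bbc.co.uk": 30}
    results = [(0.92, "http://example.com/tv"), (0.90, "http://www.bbc.co.uk/tv")]
    print(personalize(results, history))   # the BBC page moves up for this user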
But that’s been our core product to date and our core offering. There are also new avenues we’re
looking at, like our personalized home page, iGoogle,
which is proving very popular, which is another way
for us to bring together different types of data
and understand more about our users. And I think the really powerful
part of all of this is, based on the information
that the users have given us, the way they’ve taught us about
their preferences, we can take information from search
history and iGoogle, and ultimately work towards
the search engine of the future, which is one that
understands the preferences that you’ve expressed to it. So if you want us to understand where you’re located, or what type of searches you do, or what you click on, you can participate in personalized search and iGoogle, and ultimately get results that
are customized to you. And that is the end of my
presentation, and I will take just a few questions. FEMALE SPEAKER: So if anyone’s got any questions which they didn’t ask before, and they wanted to ask some, we have the roving mics, to get on with it. I’ll give you my mic. AUDIENCE: I’m [UNINTELLIGIBLE] from [UNINTELLIGIBLE] in Finland, which part of the
documents do you translate when you translate them? You translate the headline, of
course, in the search results, but then again if you want
it to be useful you need to translate something
else as well. And then, the other question,
do you have some copyright issues here when you actually
are translating somebody else’s copyrighted material and
then re-publishing it in a different language, that’s
basically what you do, right? MARISSA MAYER: So what we do
is we translate your query into the target language, so
in the example we translate the query from Arabic
into English. Then we search in English, we
bring those results back, and we translate the titles, the
blue links, as well as the snippets, to give you a sense
of is this result useful for you or not. And then when you click on a
result we’re able to direct you to a special page and a
special URL that takes that content and in place
translates it. So you can actually see the
webpage as if it was written natively in that language. And we believe that our
implementation is compliant with copyright laws. We’ve obviously had our lawyers
review that, and the initial feedback on this
first prototype has been quite excellent. People are really excited about
the idea of being able to search content that wasn’t
necessarily written in the language they understand. FEMALE SPEAKER: The gentleman
in the shirt, there. AUDIENCE: Yes, hello,
Christopher Alex from Libération in Paris. You talked a lot about the opt-in option, if you want to personalize your Google use, and
you said so that the limit where you keep the data of the
people is 18 months, and it’s because you don’t want
to keep the data. So you said it’s a compromise,
but so as you said that with an explicit agreement of
the people you could keep the data longer. So my question is, is
the [UNINTELLIGIBLE] program, will it
[UNINTELLIGIBLE] to keep the data of the people, I don’t
know, for five, six, ten years, and, secondly,
what would be the interest of the user? MARISSA MAYER: Sure. So I need to be clear about what’s currently launched; some of this was just speculation about the future, part of it in the question, and part of it on my side.
is we have personalized search, and it’s
an opt-in form. So you decide if you want to
participate in search history and have your search results
personalized. Right now, that data is
anonymized after 18 months. There’s some possibility in
the future we would allow people to do an opt in that
would allow us to keep the data for longer, though that
is not what’s currently in place, and the current opt in
does not keep the data longer than 18 months because users
haven’t explicitly agreed to have their data kept
longer than that. So that would be a separate
program we would roll out in the future. And in terms of the interest of the user, it’s an element where personalization is as personalized as you want it to be. And I do think the more data we
have, the better we can do in terms of relevance, and
ultimately providing for those users needs. So, it’s possible if we had more
information we would be able to do a better job with
search results, but that’s a trade off that the
user should make. AUDIENCE: Yes,
[UNINTELLIGIBLE PHRASE] in Germany. How important is geographical
tagging in your videos and images for you perhaps
in the future? I think you made an acquisition
in that direction. MARISSA MAYER: Tagging right now
for images and videos is incredibly important. Because we’re a text-based
search engine we ultimately need to have text in order
to do those queries. I think as you can see we’re
working very hard to make sure that the results we offer in any
particular language or any particular country
are suitable to that language or country. So as we evolve things like
universal search, it’s important for us to understand
if a video is particularly relevant, or an image
is particularly relevant to a locale. So, geographic data, geographic
tagging of images and videos isn’t particularly
important right now. I think as we look at rolling
out universal search, and really enhancing its relevance,
that is something that will become increasingly
important. FEMALE SPEAKER: The gentleman
in the green shirt there. AUDIENCE: David Smith, of
The Observer in the UK. Even those couple of examples
you showed there, I noticed on the universal searches Wikipedia
figured very highly. Just anecdotally I’ve heard
people sort of complaining about that, why does Wikipedia
always get such prominence? It just made me wonder,
particularly when news as well is being fed into it, is there
at least discussion to be had about the reliability and
authenticity of the information thrown up? For example, searching for news
stories, could or should the BBC, who we assume are quite
reliable, take priority over somebody’s blog entry? MARISSA MAYER: Sure. I think that our overall view
is that Wikipedia is a very amazing phenomenon, and a great
product, and it has a lot of really great
information. And rather than trying to make individualized judgments about particular sources, like Wikipedia, we rely on automated methods, like PageRank. It turns out that PageRank, and a sense of importance, is often a user-driven
phenomenon. So what we’re seeing with
Wikipedia is that when there’s a page that’s particularly well
done, or particularly authoritative, people, when
they mention that concept, like the Eiffel Tower, they
may link to the Wikipedia entry on that. And that link feeds into page
rank, it feeds into things like anchor text for the link,
which we use as a signal into our search quality function. So those types of links do a
lot to buoy up Wikipedia. But it’s happening because
people like the content, and are linking to the content, and
they’re finding it useful, and that’s why you’re seeing
it in the search results. AUDIENCE: Hi. Two short questions. First one– FEMALE SPEAKER: Can we have
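For intuition about how links "buoy up" a page, here is the textbook PageRank power iteration over a tiny made-up link graph; the production system layers many more signals (anchor text among them) on top of this basic idea, so this is only the classic formulation, not Google's implementation.

    # Textbook PageRank power iteration over a page -> outlinks map.

    def pagerank(links: dict[str, list[str]], damping: float = 0.85,
                 iterations: int = 50) -> dict[str, float]:
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                      # dangling page: spread evenly
                    share = damping * rank[page] / len(pages)
                    for p in pages:
                        new_rank[p] += share
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
            rank = new_rank
        return rank

    # Many pages linking to one entry concentrate rank on it.
    graph = {"blog_a": ["wikipedia_eiffel"], "blog_b": ["wikipedia_eiffel"],
             "news_site": ["wikipedia_eiffel", "blog_a"], "wikipedia_eiffel": []}
    print(pagerank(graph))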
AUDIENCE: Hi. Two short questions. First one– FEMALE SPEAKER: Can we have your name and news organization? AUDIENCE: Christopher
[? Ekolbic ?] from IDG Poland publishing
house. Two short questions, first one,
do you fight, in a way, with so-called Google bombs, you probably know what I’m talking about. And another question is, however
it’s hard to imagine functionality of sorting search
results in the regular search, that in an image search,
sorting, for example, by resolution would
be very useful. Right now, you have this is this
is medium, this is large or a small file, but this is not
enough, for example, for journalists to work. MARISSA MAYER: So for image
search what we actually have is a way of restricting to particular resolution sizes. It’s hard to find sometimes, but there’s a pull-down at the top of the page that
allows you to restrict to small, medium, large, or extra
large, which are suitable for wallpaper images, and we
encourage people to use that. And it is a common request, so
I do think that we need to make that more prominent. And the first question was,
Google bombs, so Google bombs are a phenomenon that occur
from time to time that essentially happen because
someone sets out to trick the search engine. If you take two commonly occurring words that occur frequently in the English language, or any language, but don’t occur frequently together on the web, and put them together, like miserable failure. What happens is it’s possible
because miserable failure hasn’t occurred on the web that
much together, if you have a lot of links all pointing
in one place, it causes the phenomenon where
a result turns up. And we have worked very hard to
develop algorithms that are capable of detecting when a
Google bomb is happening. It’s hard, because when you look
at blogging behavior, for example, it looks sometimes
like a Google bomb. Lots of people all setting links
with the same phrase pointing in one place. So it’s hard to tell, but
interestingly, the few Google bombs that have happened, and I
only know like five or less, have all become quite famous. And so interestingly now when
you hear about them, when you hear about miserable failure,
our first result is actually the right one, because it
usually is what people are looking for. So it’s sort of a
self-fulfilling prophecy. Because the Google bomb happens
and then it gains notoriety, it ultimately becomes
the right result for itself, go figure. FEMALE SPEAKER: I think we’ve
got time for one last question, just the lady
at the front here. AUDIENCE: Hi, Victoria Shannon,
at The International Herald Tribune. I’m just curious how far along
are you on developing algorithms to search video by
image, rather than by text. There’s a European effort to do
the same thing that Google has not yet joined, I wonder if
you’re going to do that. MARISSA MAYER: There are
preliminary efforts, but they’re very much in
the research phase at the current moment. And they’re happening at Google,
they’re happening in universities. Trying to understand image
recognition well enough to do a search of that caliber is
something that’s of extreme interest and it’s just
very useful. But they’re not really ready for prime-time consumption because they have high error rates, they are somewhat problematic. I actually think that from my
own personal views, it’s much more likely that we would
develop, we or someone else would develop a good voice
to text mechanism. If you could actually extract
the voice and the words out of a video, and thereby create
a transcript that could be searched, that’s actually a
much more likely way that video would ultimately be
searched, because I think the voice to text recognition that
we have is further along than image recognition. FEMALE SPEAKER: Okay everyone,
that’s great. Thank you very much Marissa,
thank you very much indeed.


