How Engineering Robots Works: Crash Course Engineering #33


Someday in the near future, you might be
wandering around town at night when out of the
blue you’ll stumble across a robot. If that happens, maybe you shouldn’t be
too surprised. Not so long ago, robots were mostly found
in the realms of science fiction. Today, they’re vacuuming floors, building
cars, and even roaming the surface of Mars. Robots are still pretty far from having a human level
capacity of intelligence, or even dexterity for that matter, but they’re already a big part of many
branches of engineering. In the US, over a hundred thousand robots
have been added to factory floors since 2010. Knowing what exactly robots are, what they
can do and how they work is more important
than you might think. Because soon, they might be living among us.
[Theme Music]
The classic picture of a robot is something
with human-like intelligence, and maybe even
a humanoid appearance. But like so much else, real-life robots aren’t
much like what you usually see on TV. Robots come in all shapes and sizes, and
they can be built very differently depending
on what they’re used for. For example, some of the robots used in mining
are made from a camera, mounted on a small
chassis with wheels. That allows them to enter and inspect mine
shafts, and even retrieve leftover material from
places it’s hard to get people in and out of. Meanwhile, in medicine, specialized robotic
arms, with a bit of human assistance, can perform
precise surgeries through the tiniest incisions. There’s also a difference between robotics
and artificial intelligence, or AI, which
people sometimes confuse. While some of the concepts are similar,
robotics deals with a specific set of ideas,
although it does borrow a few from AI. AI deals broadly with the goal of automating
decision-making for complex tasks – the kind
you can’t write a simple set of rules for. That could be everything from playing chess
to driving cars. Right now, AI systems tend to have very narrow
goals. But the holy grail of AI is to develop a system
that can make intelligent decisions about any sort
of task using different sources of information. Our focus will be on robots, which are designed
for more specific purposes in the physical world. In engineering terms, a robot is a machine
designed to interact with its environment, make an
appropriate decision based on those surroundings, and then carry out the jobs related to its
goal, all automatically. Which means a true robot doesn’t require
a human controlling exactly what it does. In general, the field of robotics also deals
with machines that do all of the same things, but might require a human operator to carry
out some tasks.
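That sense-decide-act cycle is the heart of most robot software. Here’s a minimal sketch of the idea in Python, with hypothetical stand-ins for the sensor and motor hardware a real robot would use:

```python
import time
import random

def read_distance():
    # Hypothetical sensor stub: a real robot would query hardware here.
    return random.uniform(0.0, 2.0)  # meters to the nearest obstacle

def act(command):
    # Hypothetical effector stub: a real robot would drive motors here.
    print(f"executing: {command}")

def control_loop(cycles=10):
    for _ in range(cycles):
        distance = read_distance()     # 1. sense the environment
        if distance < 0.5:             # 2. decide based on the surroundings
            command = "turn_left"
        else:
            command = "drive_forward"
        act(command)                   # 3. carry out the task automatically
        time.sleep(0.1)                # repeat about ten times per second

if __name__ == "__main__":
    control_loop()
```

Whether fully automatic or not, virtually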
all robots have a few features in common. First, robots are machines made from
materials that occupy physical space, so they’re
more than just lines of computer code. That’s one of the key features that distinguishes
them from an AI, although computer engineering
is important for robots as well. Second, most robots have some way of sensing
features of their environment, usually by measuring
light, sound, or force feedback. They can then use that information to make
a decision about what to do next, depending
on what they’re designed for. The tasks robots are built for tend to be
complex, requiring a sequence of different
motions. So an automatic door that opens or shuts in
response to whether you stand in front of a
sensor wouldn’t be considered a robot. The tasks that robots handle are more
sophisticated, like welding two intricately-shaped
pieces of metal together. Third, to interpret the signals they receive from their environment and coordinate some sort of response, robots have computers built somewhere into their design. These work just like any computer, taking
inputs and delivering outputs. But unlike most ordinary software, the software on
a robot’s computer generates electrical signals that
are directly passed on to the robot’s hardware, instead of just changing information in a
file or on a screen. That also requires a power source. One option is to physically hook up to the
power grid using wires and a socket. For example, the 2013 version of the Atlas robot developed by robotics company Boston Dynamics had a humanoid design that could walk and carry objects, much like humans do. To operate, Atlas relied on a cable that tethered
it to a power supply. But tethering robots like this limits how
far they can move and interferes with their
mobility. So to let Atlas move around more freely,
engineers installed a battery on the robot’s
structure so it could power itself. Unfortunately the materials of a battery tend
to be rather heavy, which posed its own challenges. If we don’t want robots like these toppling
over all the time, it takes a bit of mechanical engineering knowhow
to work out where to position the battery with respect
to the robot’s center of mass.
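The underlying calculation is just a weighted average of where each component’s mass sits. A rough sketch, with made-up masses and heights for a hypothetical humanoid:

```python
# Center of mass along the vertical axis: sum(m_i * h_i) / sum(m_i).
# The masses (kg) and heights above the ground (m) are made-up values
# for a hypothetical humanoid robot.
components = [
    ("legs",    30.0, 0.5),
    ("torso",   40.0, 1.2),
    ("battery", 15.0, 1.3),  # heavy -- mounting it this high raises the center of mass
    ("head",     5.0, 1.7),
]

total_mass = sum(mass for _, mass, _ in components)
center_of_mass = sum(mass * height for _, mass, height in components) / total_mass
print(f"center of mass: {center_of_mass:.2f} m above the ground")
```

Mounting the battery lower would bring that figure down and make the robot harder to topple. You might need some chemical engineering in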
the design of the battery, too. In the case of Atlas, engineers gave it a
hydraulic pump in its torso to help it support
the extra weight of its lithium-ion battery. And now, only a few years later, it can do
backflips and parkour. More generally, some electrical engineering
also has to go into wiring the signals from the
computer program to the robot’s physical parts. Which brings us to another common feature
of robots: they have mechanical parts, like grips or
wheels, for carrying out whatever physical
tasks they need to in their environments. That might be turning a lever, picking up
an object, or just moving somewhere else. And there are some unique challenges to designing
those parts. Consider a robot designed to pick fruit from
trees. From an engineering perspective, it needs
some fundamental qualities: the ability to recognize fruits and distinguish
them from the rest of the plant, navigate its environment to move toward fruits
that need picking, and then to pick them and put them into a
container. Let’s start with how it moves – a fairly
basic requirement for lots of robots. Like with that mining robot, you could simply put
some standard axles with wheels on the bottom of
your robot and attach them to a motor, like on a car. That would be fine if the robot was going to
be operating mostly on smooth, even surfaces,
like roads or factory floors. But most fruits are grown outdoors, sometimes
in rough terrain and difficult environments. So you might need to design adjustable wheels
that change height independently, or add treads
to overcome small bumps, like on a tank. The problem with wheels is that they’re
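One common way robots steer a wheeled base like this, by the way, is a differential drive: spin the left and right wheels at different speeds and the robot turns, with no car-style steering linkage needed. A sketch of the standard kinematics, with made-up wheel dimensions:

```python
def body_velocity(left_rad_s, right_rad_s,
                  wheel_radius_m=0.1, wheel_separation_m=0.4):
    """Forward kinematics of a differential-drive base.

    The wheel radius and separation are made-up dimensions.
    Returns (forward speed in m/s, turn rate in rad/s).
    """
    v = wheel_radius_m * (left_rad_s + right_rad_s) / 2.0
    omega = wheel_radius_m * (right_rad_s - left_rad_s) / wheel_separation_m
    return v, omega

print(body_velocity(5.0, 5.0))  # equal speeds: (0.5, 0.0), straight ahead
print(body_velocity(3.0, 5.0))  # right wheel faster: (0.4, 0.5), veering left
```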
not very good at overcoming large obstacles. If there’s a fallen branch or a boulder
blocking the way, the robot needs to be able
to climb over it. Giving the robot legs could allow it to jump,
but that has its own problems: robots with
legs tend to fall down…a lot. As the team at Boston Dynamics found when
designing Atlas, programming a robot’s computer to interpret its
environment while handling the dynamics of all
those mechanical parts is trickier than it seems! It really makes you appreciate what a good
job your brain is doing. Of course, to actually make sense of its environment
and find fruit, the robot will need sensors – devices that measure physical characteristics
and translate them into a signal. To find apples, for example, the robot might
have an array of light-sensitive semiconductors, like the kind that make up the light-capturing
pixels in a digital camera, to scan an orchard. But the information sent by the camera sensor is
interpreted by the computer as an array of colored
pixels that don’t mean an awful lot on their own. The average person, can take one glance at
a curvy shape of reddish pixels and instantly
recognize it as an apple. For a computer, that requires a fairly sophisticated
visual algorithm. What’s more, people are good at seeing where the edges
of objects are, and interpreting the relationships they
have to their environment and how far away they are. You know an apple that looks very small is
most likely one that’s far away. But even a relatively smart computer that
can recognize apples might not know whether it’s a tiny apple only
a few centimeters away or an enormous apple
a few kilometers away. Which would affect whether the computer’s
programming tells it to pick the apple or not.
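One common fix is the pinhole camera model: if the software assumes a typical real-world apple size, then distance is roughly the camera’s focal length times the real diameter, divided by the diameter in the image. A sketch, where the focal length and apple size are assumed values:

```python
def estimate_distance_m(apparent_diameter_px,
                        focal_length_px=800.0, real_diameter_m=0.08):
    """Pinhole camera model: distance = focal_length * real_size / image_size.

    The focal length (in pixels, from camera calibration) and the
    typical apple diameter (~8 cm) are assumptions.
    """
    return focal_length_px * real_diameter_m / apparent_diameter_px

print(estimate_distance_m(40))   # a 40-pixel apple is roughly 1.6 m away
print(estimate_distance_m(400))  # a 400-pixel apple is only about 0.16 m away
```

All of these issues are what are known as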
computer vision problems. Computer vision deals with how to train software
to take the input data from images or video, like the kind that are delivered from digital
cameras, and interpret it the way a human would.
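To give a flavor of what even a crude version looks like, here’s a sketch of a color-threshold apple detector using the OpenCV library. It assumes the opencv-python package and a hypothetical orchard.jpg image, and it would only catch red, well-lit, unobstructed apples; real pickers use far more robust methods:

```python
import cv2
import numpy as np

# Hypothetical input image of an orchard.
image = cv2.imread("orchard.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis in HSV, so threshold two ranges and combine.
lower = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
upper = cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
mask = lower | upper

# Treat each sufficiently large red blob as a candidate apple.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for contour in contours:
    if cv2.contourArea(contour) > 500:  # ignore tiny specks of red
        x, y, w, h = cv2.boundingRect(contour)
        print(f"candidate apple at ({x}, {y}), roughly {w}x{h} px")
```

Even once it’s found an apple and moved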
itself close to it, the fruit picker is going to
need mechanical parts to actually pick fruit. The main mechanical parts used by most industrial
robots are called actuators and effectors. Actuators are like a robot’s muscles. They convert stored energy into movement. One popular type is the electrical actuator: an electric
motor that turns wheels or gears to rotate the robot’s
connected parts with respect to one another. These are the sorts of mechanisms that
would extend the robot’s arms toward or away
from a particular branch. Linear actuators can achieve this by using a
motor to extend the part up and down a thread,
like a nut on a bolt. They can also use compressed fluids, like air or oil, to
extend a part outwards, then use a motor to compress
the fluid and bring the parts back when needed.
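For the nut-on-a-bolt style, the geometry gives a direct conversion: every full rotation of the screw advances the nut by one lead. A small sketch, assuming a 2 mm lead:

```python
def extension_mm(motor_rotations, lead_mm=2.0):
    """Linear travel of a lead-screw actuator: rotations x lead.

    The 2 mm lead -- how far the nut advances per full rotation
    of the screw -- is an assumed, though common, value.
    """
    return motor_rotations * lead_mm

def rotations_needed(target_mm, lead_mm=2.0):
    """Inverse: motor rotations required for a given extension."""
    return target_mm / lead_mm

print(extension_mm(50))      # 50 rotations -> 100 mm of extension
print(rotations_needed(30))  # 30 mm of travel -> 15 rotations
```

Effectors, meanwhile, are the parts that actually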
have an effect on the robot’s environment –
basically, the robot’s hands. To deal with the irregular shapes of different fruits,
you could have what’s called a vacuum grip that
can suck up large objects and hold them in place. But most of the focus on building robots has
been on mechanical effectors – the kind that rely
on tactile feedback and manipulation. In other words, they give the robot an artificial
sense of touch, perhaps with force-sensitive
electrodes on the effector’s surface. Having a sense of feedback is important for applying the
right amount of pressure – otherwise the apple might
slip out of the robot’s grip or be crushed into a pulp. To achieve this, the effector might be a simple
two-part claw, or something more sophisticated with
many parts modeled on a human hand.
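A common way to use that force feedback is a proportional controller: squeeze harder when the measured grip force is below a target, ease off when it’s above. A sketch with assumed numbers for the target force and gain:

```python
def grip_control_step(measured_force_n, target_force_n=5.0, gain=0.1):
    """One step of a proportional grip-force controller.

    The target force (enough to hold an apple without bruising it)
    and the gain (how aggressively to correct) are assumptions.
    Returns a motor command: positive tightens, negative loosens.
    """
    error = target_force_n - measured_force_n
    return gain * error

# Too loose (2 N measured): tighten. Too tight (8 N): back off.
print(grip_control_step(2.0))  #  0.3 -> close the claw a bit more
print(grip_control_step(8.0))  # -0.3 -> loosen before the apple gets crushed
```

Picking fruit is the kind of job that robots could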
accomplish at scale much more easily than humans,
freeing them to work on other aspects of farming. But robotics isn’t just aimed at saving
labor. Robots can also be used in environments that
are far too dangerous to send humans into. Bomb disposal robots – which are actually more like
drones – are operated by humans to find explosive
devices and disarm them from a safe distance. In the future, fully automated robots might
find uses in other harsh environments like
the deep sea and space. But it’s likely that the place robots will have the
most impact won’t be in the jobs they do instead
of humans, but the ones they do alongside them. Features of robotics are already making their
way into healthcare, like in the development
of prosthetic limbs. But in situations like surgery or disaster
rescue operations, a combination of human smarts and purpose-built
robotic strength could create safer, more efficient,
and totally new ways of doing things. So like many engineering tools, robots will work
best when they weave into our existing methods,
working alongside us to accomplish our goals. Robots might be the future, but it’s a far
cry from the Terminator. In this episode we looked at robots and the
engineering principles behind them. We learned how robots use sensors to interpret
their environment, how actuators and effectors allow a robot to
manipulate the objects around it to accomplish a task, and how computers coordinate the efforts of the two. AR Poster available now at DFTBA.com! Crash Course Engineering is produced in association
with PBS Digital Studios, which also produces
It’s Okay To Be Smart, a show all about our curious universe and
the science that makes it possible, hosted
by Dr. Joe Hanson. Check it out at the link in the description. Crash Course is a Complexly production and this
episode was filmed in the Doctor Cheryl C. Kinney
Studio with the help of these wonderful people. And our amazing graphics team is Thought Cafe.
