
Deep Learning Goes to Boot Camp

The ability to make decisions autonomously is not just what makes robots useful, it’s what makes robots robots. We value robots for their ability to sense what’s going on around them, make decisions based on that information, and then take useful actions without our input. In the past, robotic decision making followed highly structured rules—if you sense this, then do that. In structured environments like factories, this works well enough. But in chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.
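As a toy illustration of that rules-based paradigm, a structured-environment controller is essentially a chain of hand-written sense-then-act rules. The sketch below is illustrative only; the sensor fields and actions are hypothetical, not any real robot’s interface.

```python
# Minimal sketch of rules-based ("if you sense this, then do that") robot logic.
# The sensor fields and actions are hypothetical, for illustration only.

def decide(sensors: dict) -> str:
    """Map a sensor snapshot to an action using fixed, hand-written rules."""
    if sensors.get("obstacle_distance_m", float("inf")) < 0.5:
        return "stop"                      # rule 1: something is too close
    if sensors.get("path_blocked", False):
        return "turn_left"                 # rule 2: predefined detour
    return "drive_forward"                 # default: keep going

print(decide({"obstacle_distance_m": 0.3}))   # -> "stop"
print(decide({"path_blocked": True}))         # -> "turn_left"
```

Rules like these break down as soon as the world presents a situation the programmer never anticipated.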

RoMan, along with many other robots including home vacuums, drones, and autonomous cars, handles the challenges of semistructured environments through artificial neural networks—a computing approach that loosely mimics the structure of neurons in biological brains. About a decade ago, artificial neural networks began to be applied to a wide variety of semistructured data that had previously been very difficult for computers running rules-based programming (generally referred to as symbolic reasoning) to interpret. Rather than recognizing specific data structures, an artificial neural network is able to recognize data patterns, identifying novel data that are similar (but not identical) to data that the network has encountered before. Indeed, part of the appeal of artificial neural networks is that they are trained by example, by letting the network ingest annotated data and learn its own system of pattern recognition. For neural networks with multiple layers of abstraction, this technique is called deep learning.
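To make “trained by example” concrete, here is a minimal sketch of the idea, assuming PyTorch is available. The toy two-layer network and made-up labeled data are purely illustrative and are not the software RoMan runs.

```python
# Minimal sketch: a small neural network learning patterns from labeled examples.
# Assumes PyTorch; the data and architecture are illustrative only.
import torch
import torch.nn as nn

# Toy annotated data: 256 feature vectors, each labeled 0 or 1.
x = torch.randn(256, 16)
y = (x.sum(dim=1) > 0).long()

# Two layers of learned abstraction: a (very) shallow "deep" network.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                      # trained by example, not by rules
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# The trained model labels novel inputs that merely resemble its training data.
print(model(torch.randn(1, 16)).argmax(dim=1))
```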

Even though humans are typically involved in the training process, and even though artificial neural networks were inspired by the neural networks in human brains, the kind of pattern recognition a deep learning system does is fundamentally different from the way humans see the world. It’s often nearly impossible to understand the relationship between the data input into the system and the interpretation of the data that the system outputs. And that difference—the “black box” opacity of deep learning—poses a potential problem for robots like RoMan and for the Army Research Lab.

In chaotic, unfamiliar, or poorly defined settings, reliance on rules makes robots notoriously bad at dealing with anything that could not be precisely predicted and planned for in advance.

This opacity means that robots that rely on deep learning have to be used carefully. A deep-learning system is good at recognizing patterns, but lacks the world understanding that a human typically uses to make decisions, which is why such systems do best when their applications are well defined and narrow in scope. “When you have well-structured inputs and outputs, and you can encapsulate your problem in that kind of relationship, I think deep learning does very well,” says Tom Howard, who directs the University of Rochester’s Robotics and Artificial Intelligence Laboratory and has developed natural-language interaction algorithms for RoMan and other ground robots. “The question when programming an intelligent robot is, at what practical size do those deep-learning building blocks exist?” Howard explains that when you apply deep learning to higher-level problems, the number of possible inputs becomes very large, and solving problems at that scale can be challenging. And the potential consequences of unexpected or unexplainable behavior are much more significant when that behavior is manifested through a 170-kilogram two-armed military robot.

After a couple of minutes, RoMan hasn’t moved—it’s still sitting there, pondering the tree branch, arms poised like a praying mantis. For the last 10 years, the Army Research Lab’s Robotics Collaborative Technology Alliance (RCTA) has been working with roboticists from Carnegie Mellon University, Florida State University, General Dynamics Land Systems, JPL, MIT, QinetiQ North America, University of Central Florida, the University of Pennsylvania, and other top research institutions to develop robot autonomy for use in future ground-combat vehicles. RoMan is one part of that process.

The “go clear a path” task that RoMan is slowly thinking through is difficult for a robot because the task is so abstract. RoMan needs to identify objects that might be blocking the path, reason about the physical properties of those objects, figure out how to grab them and what kind of manipulation technique might be best to apply (like pushing, pulling, or lifting), and then make it happen. That’s a lot of steps and a lot of unknowns for a robot with a limited understanding of the world.

This limited understanding is where the ARL robots begin to differ from other robots that rely on deep learning, says Ethan Stump, chief scientist of the AI for Maneuver and Mobility program at ARL. “The Army can be called upon to operate basically anywhere in the world. We do not have a policy for collecting data in all the different domains in which we might be operating. We may be deployed to some unknown forest on the other side of the world, but we’ll be expected to perform just as well as we would in our own backyard,” he says. Most deep-learning systems function reliably only within the domains and environments in which they’ve been trained. Even if the domain is something like “every drivable road in San Francisco,” the robot will do fine, because that’s a data set that has already been collected. But, Stump says, that’s not an option for the military. If an Army deep-learning system doesn’t perform well, they can’t simply solve the problem by collecting more data.

ARL’s robots also need to have a broad awareness of what they’re doing. “In a standard operations order for a mission, you have goals, constraints, a paragraph on the commander’s intent—basically a narrative of the purpose of the mission—which provides contextual info that humans can interpret and gives them the structure for when they need to make decisions and when they need to improvise,” Stump explains. In other words, RoMan may need to clear a path quickly, or it may need to clear a path quietly, depending on the mission’s broader objectives. That’s a big ask for even the most advanced robot. “I can’t think of a deep-learning approach that can deal with this kind of information,” Stump says.

While I watch, RoMan is reset for a second attempt at branch removal. ARL’s approach to autonomy is modular, where deep learning is combined with other techniques, and the robot is helping ARL figure out which tasks are appropriate for which techniques. At the moment, RoMan is testing two different ways of identifying objects from 3D sensor data: UPenn’s approach is deep-learning-based, while Carnegie Mellon is using a method called perception through search, which relies on a more traditional database of 3D models. Perception through search works only if you know exactly which objects you’re looking for in advance, but training is much faster since you need only a single model per target. It can also be more accurate when perception of the target is difficult—if the target is partially hidden or upside-down, for example. ARL is testing these strategies to determine which is the most versatile and effective, letting them run at the same time and compete against each other.
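As a rough conceptual sketch of how perception through search differs from a learned classifier, the snippet below matches an observed feature vector against a tiny database of known model signatures. The vectors and the nearest-neighbor match are made up for illustration and stand in for matching against real 3D model geometry; this is not UPenn’s or Carnegie Mellon’s code.

```python
# Conceptual sketch of "perception through search": match an observed object's
# features against a small database of known model signatures.
# Feature vectors here are made up; real systems compare 3D geometry.
import numpy as np

model_db = {
    "tree_branch": np.array([0.9, 0.1, 0.4]),
    "door":        np.array([0.2, 0.8, 0.5]),
    "debris_pile": np.array([0.6, 0.6, 0.1]),
}

def identify(observation: np.ndarray) -> str:
    """Return the database entry whose signature is closest to the observation."""
    return min(model_db, key=lambda name: np.linalg.norm(model_db[name] - observation))

# Only one "model" per target is needed, but unknown objects are always forced
# into the nearest known category -- hence the need to know targets in advance.
print(identify(np.array([0.85, 0.15, 0.35])))   # -> "tree_branch"
```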

Perception is one of the things that deep learning tends to excel at. “The computer vision community has made crazy progress using deep learning for this stuff,” says Maggie Wigness, a computer scientist at ARL. “We’ve had good success with some of these models that were trained in one environment generalizing to a new environment, and we intend to keep using deep learning for these sorts of tasks, because it’s the state of the art.”

ARL’s modular approach might combine several techniques in ways that leverage their particular strengths. For example, a perception system that uses deep-learning-based vision to classify terrain could work alongside an autonomous driving system based on an approach called inverse reinforcement learning, where the model can rapidly be created or refined by observations from human soldiers. Traditional reinforcement learning optimizes a solution based on established reward functions, and is often applied when you’re not necessarily sure what optimal behavior looks like. This is less of a concern for the Army, which can generally assume that well-trained humans will be nearby to show a robot the right way to do things. “When we deploy these robots, things can change very quickly,” Wigness says. “So we wanted a technique where we could have a soldier intervene, and with just a few examples from a user in the field, we can update the system if we need a new behavior.” A deep-learning technique would require “a lot more data and time,” she says.
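One way to read “inverse reinforcement learning from a few examples” is: recover a reward function whose optimal behavior matches what the human demonstrated. The sketch below uses a simple feature-matching update in that spirit, with made-up feature expectations and a stubbed-out planner; it illustrates the general idea, not ARL’s implementation.

```python
# Sketch of inverse reinforcement learning via feature matching:
# adjust reward weights until the planner's behavior resembles the demonstrations.
# The feature expectations and the planner stub are illustrative assumptions.
import numpy as np

mu_demo = np.array([0.8, 0.1, 0.6])    # avg. features of human demonstrations,
                                        # e.g. [stay_on_trail, noise_made, speed]

def plan_with_reward(w: np.ndarray) -> np.ndarray:
    """Stub for a planner: returns the feature expectations of the behavior
    that is optimal under reward weights w (a toy stand-in here)."""
    return np.clip(w, 0.0, 1.0)

w = np.zeros(3)                         # unknown reward weights to recover
for _ in range(100):
    mu_policy = plan_with_reward(w)
    w += 0.1 * (mu_demo - mu_policy)    # push reward toward demonstrated features

print(np.round(w, 2))                   # weights that reproduce the demonstrations
```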

It’s not just data-sparse problems and fast adaptation that deep learning struggles with. There are also questions of robustness, explainability, and safety. “These questions aren’t unique to the military,” says Stump, “but it’s especially important when we’re talking about systems that may incorporate lethality.” To be clear, ARL is not currently working on lethal autonomous weapons systems, but the lab is helping to lay the groundwork for autonomous systems in the U.S. military more broadly, which means considering ways in which such systems may be used in the future.

The requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that’s a problem.

Safety is an obvious priority, and yet there isn’t a clear way of making a deep-learning system verifiably safe, according to Stump. “Doing deep learning with safety constraints is a major research effort. It’s hard to add those constraints into the system, because you don’t know where the constraints already in the system came from. So when the mission changes, or the context changes, it’s hard to deal with that. It’s not even a data question; it’s an architecture question.” ARL’s modular architecture, whether it’s a perception module that uses deep learning or an autonomous driving module that uses inverse reinforcement learning or something else, can form parts of a broader autonomous system that incorporates the kinds of safety and adaptability that the military requires. Other modules in the system can operate at a higher level, using different techniques that are more verifiable or explainable and that can step in to protect the overall system from adverse unpredictable behaviors. “If other information comes in and changes what we need to do, there’s a hierarchy there,” Stump says. “It all happens in a rational way.”
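A rough sketch of that kind of hierarchy: a learned module proposes commands, and a simpler, hand-inspectable module above it can veto or clamp them. All module names, limits, and the command format below are hypothetical, chosen only to illustrate the layering.

```python
# Sketch of a modular autonomy stack: a learned planner proposes commands,
# and a simpler, verifiable safety module can veto or clamp them.
# Names, limits, and the command format are hypothetical.

def learned_planner(observation: dict) -> dict:
    """Stand-in for a deep-learning module whose internals are a black box."""
    return {"speed_mps": 3.0, "heading_deg": 90.0}

def safety_monitor(command: dict, observation: dict) -> dict:
    """Hand-written, inspectable rules that override the learned module."""
    if observation.get("person_nearby", False):
        return {"speed_mps": 0.0, "heading_deg": command["heading_deg"]}
    command["speed_mps"] = min(command["speed_mps"], 2.0)   # hard speed limit
    return command

obs = {"person_nearby": True}
print(safety_monitor(learned_planner(obs), obs))   # the safe override wins
```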

Nicholas Roy, who leads the Robust Robotics Group at MIT and describes himself as “somewhat of a rabble-rouser” due to his skepticism of some of the claims made about the power of deep learning, agrees with the ARL roboticists that deep-learning approaches often can’t handle the kinds of challenges that the Army has to be prepared for. “The Army is always entering new environments, and the adversary is always going to be trying to change the environment so that the training process the robots went through simply won’t match what they’re seeing,” Roy says. “So the requirements of a deep network are to a large extent misaligned with the requirements of an Army mission, and that’s a problem.”

Roy, who has worked on abstract reasoning for ground robots as part of the RCTA, emphasizes that deep learning is a useful technology when applied to problems with clear functional relationships, but when you start looking at abstract concepts, it’s not clear whether deep learning is a viable approach. “I’m very interested in finding how neural networks and deep learning could be assembled in a way that supports higher-level reasoning,” Roy says. “I think it comes down to the notion of combining multiple low-level neural networks to express higher-level concepts, and I do not believe that we understand how to do that yet.” Roy gives the example of using two separate neural networks, one to detect objects that are cars and the other to detect objects that are red. It’s harder to combine those two networks into one larger network that detects red cars than it would be if you were using a symbolic reasoning system based on structured rules with logical relationships. “Lots of people are working on this, but I haven’t seen a real success that drives abstract reasoning of this kind.”
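The “red car” example is easy to state symbolically once the two detectors exist as separate components: it is just a logical conjunction over their outputs. The hard problem Roy describes is doing the equivalent composition inside a single network. The stand-in detectors below are trivial placeholders for what would really be two trained networks.

```python
# Symbolic composition of two independent detectors: "red AND car" is one rule.
# The detectors are trivial stand-ins; real ones would be separate neural networks.

def is_car(obj: dict) -> bool:
    return obj.get("category") == "car"

def is_red(obj: dict) -> bool:
    return obj.get("color") == "red"

def is_red_car(obj: dict) -> bool:
    # With symbolic rules, composing concepts is a single logical conjunction.
    # Merging two trained networks into one "red car" network is the open problem.
    return is_car(obj) and is_red(obj)

print(is_red_car({"category": "car", "color": "red"}))    # True
print(is_red_car({"category": "car", "color": "blue"}))   # False
```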

For the foreseeable future, ARL is making sure that its autonomous systems are safe and robust by keeping humans around for both higher-level reasoning and occasional low-level advice. Humans might not be directly in the loop at all times, but the idea is that humans and robots are more effective when working together as a team. When the most recent phase of the Robotics Collaborative Technology Alliance program began in 2009, Stump says, “we’d already had many years of being in Iraq and Afghanistan, where robots were often used as tools. We’ve been trying to figure out what we can do to transition robots from tools to acting more as teammates within the squad.”

RoMan gets a little bit of help when a human supervisor points out a region of the branch where grasping might be most effective. The robot doesn’t have any fundamental knowledge about what a tree branch actually is, and this lack of world knowledge (what we think of as common sense) is a fundamental problem with autonomous systems of all kinds. Having a human leverage our vast experience into a small amount of guidance can make RoMan’s job much easier. And indeed, this time RoMan manages to successfully grab the branch and noisily haul it across the room.

Turning a robot into a good teammate can be difficult, because it can be tricky to find the right amount of autonomy. Too little and it would take most or all of the focus of one human to manage one robot, which may be appropriate in special situations like explosive-ordnance disposal but is otherwise not efficient. Too much autonomy and you’d start to have issues with trust, safety, and explainability.

“I think the level that we’re looking for here is for robots to operate on the level of working dogs,” explains Stump. “They understand exactly what we need them to do in limited circumstances, they have a small amount of flexibility and creativity if they are faced with novel circumstances, but we don’t expect them to do creative problem-solving. And if they need help, they fall back on us.”

RoMan is not likely to find itself out in the field on a mission anytime soon, even as part of a team with humans. It’s very much a research platform. But the software being developed for RoMan and other robots at ARL, called Adaptive Planner Parameter Learning (APPL), will likely be used first in autonomous driving, and later in more complex robotic systems that could include mobile manipulators like RoMan. APPL combines different machine-learning techniques (including inverse reinforcement learning and deep learning) arranged hierarchically underneath classical autonomous navigation systems. That allows high-level goals and constraints to be applied on top of lower-level programming. Humans can use teleoperated demonstrations, corrective interventions, and evaluative feedback to help robots adapt to new environments, while the robots can use unsupervised reinforcement learning to adjust their behavior parameters on the fly. The result is an autonomy system that can enjoy many of the benefits of machine learning, while also providing the kind of safety and explainability that the Army needs. With APPL, a learning-based system like RoMan can operate in predictable ways even under uncertainty, falling back on human tuning or human demonstration if it ends up in an environment that’s too different from what it trained on.
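A loose sketch of the APPL idea as described above: the classical navigation stack stays in place, learning only adjusts its tuning parameters, and human corrections nudge those parameters. The parameter names, planner stub, and update rule are illustrative assumptions, not the actual APPL software.

```python
# Loose sketch of learning planner *parameters* on top of a classical navigator.
# The parameter names, planner stub, and update rule are illustrative only.

params = {"max_speed": 1.0, "obstacle_margin": 0.5}   # knobs of a classical planner

def classical_planner(goal: str, params: dict) -> str:
    """Stub for a conventional navigation stack configured by `params`."""
    return (f"path to {goal} at {params['max_speed']:.1f} m/s, "
            f"{params['obstacle_margin']:.1f} m clearance")

def apply_correction(params: dict, feedback: int) -> dict:
    """Nudge parameters from a human correction (+1 = bolder, -1 = more cautious)."""
    params["max_speed"] = min(2.0, max(0.2, params["max_speed"] + 0.2 * feedback))
    params["obstacle_margin"] = min(1.0, max(0.2, params["obstacle_margin"] - 0.1 * feedback))
    return params

print(classical_planner("rally point", params))
params = apply_correction(params, feedback=-1)        # operator: "be more cautious"
print(classical_planner("rally point", params))
```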

It’s tempting to look at the rapid progress of commercial and industrial autonomous systems (autonomous cars being just one example) and wonder why the Army seems to be somewhat behind the state of the art. But as Stump finds himself having to explain to Army generals, when it comes to autonomous systems, “there are lots of hard problems, but industry’s hard problems are different from the Army’s hard problems.” The Army doesn’t have the luxury of operating its robots in structured environments with lots of data, which is why ARL has put so much effort into APPL, and into maintaining a place for humans. Going forward, humans are likely to remain a key part of the autonomous framework that ARL is developing. “That’s what we’re trying to build with our robotics systems,” Stump says. “That’s our bumper sticker: ‘From tools to teammates.’ ”

This article appears in the October 2021 print issue as “Deep Learning Goes to Boot Camp.”
