
    Classifying Environmental Features From Local Observations of Emergent Swarm Behavior

    2020-05-21
    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 3

    Megan Emmons, Anthony A. Maciejewski, Charles Anderson, and Edwin K. P. Chong

    Abstract—Robots in a swarm are programmed with individual behaviors but then interactions with the environment and other robots produce more complex, emergent swarm behaviors. One discriminating feature of the emergent behavior is the local distribution of robots in any given region. In this work, we show how local observations of the robot distribution can be correlated to the environment being explored and hence the location of openings or obstructions can be inferred. The correlation is achieved here with a simple, single-layer neural network that generates physically intuitive weights and provides a degree of robustness by allowing for variation in the environment and number of robots in the swarm. The robots are simulated assuming random motion with no communication, a minimalist model in robot sophistication, to explore the viability of cooperative sensing. We culminate our work with a demonstration of how the local distribution of robots in an unknown, office-like environment can be used to locate unobstructed exits.

    I. Introduction

    While individual robots or small teams of robots have been used for exploration in relatively controlled settings, harsh environments like partially collapsed buildings and underground mines remain an important challenge. Our goal is to leverage the domain of swarm robotics to expand the types of environments that can be reliably explored. In this work we provide a baseline for what information can be obtained about features in an unknown environment from a minimalist swarm, i.e., one comprised of very simple, inexpensive robots that have no sensing or direct communication abilities.

    Inspired by cooperative biological systems like ants and bees, robotic swarms are a relatively new area of robotics research that extend multi-robot systems by incorporating significantly more robots. The increase in robot numbers is frequently countered by a decrease in the individual robot complexity to ensure the entire system is scalable [1] and more easily managed by a human. Like multi-robot systems, swarms can accomplish complex tasks that exceed the capabilities of the individual robots, but swarms have additional benefits in exploration applications. The swarm can cover an area more efficiently than an individual robot or small team, and, as we demonstrate in this paper, environmental features can be inferred without requiring robots to explicitly store or relay information, further increasing system robustness and decreasing exploration time.

    Feature inference is achieved using local observations of the swarm distribution. Each individual robot is programmed with a set of known behaviors. Frequent robot-robot and robot-environment interactions naturally lead to more complex but often difficult to predict emergent behaviors. The emergent behavior can be quantified by different properties (see, e.g., [2]) but in this work we focus on how the robots are distributed. Hence, there is a correlation between three key factors: individual robot behaviors, environment features, and the observable distribution of robots. If two factors are known, the third can be inferred. If the robot behaviors are independent and the environment is known, a partial differential equation (PDE) can be derived to exactly model the robot distribution [3]. With a finite number of environments, a least-squared error comparison between each derived PDE model and an observed robot distribution can be used to identify in which environment the robots are moving.

    As individual robot behaviors become more sophisticated and environments become more varied, there is no plausible deterministic approach for predicting the robot distribution, but there is still a correlation. In this paper, we exploit the correlation by using a simple, single-layer neural network to demonstrate how known individual robot behaviors and locally observed robot distributions can accurately predict environmental features. We focus on a minimum sensing scenario where the robots are limited to random motion and have no communication abilities. A simple, single-layer neural network is then trained to correlate the number of robots in a central region of the environment with the environment type itself. Despite the limited robot capabilities and local observations, this work shows the distribution of a swarm can be used to quickly and accurately infer environmental features.

    The primary contribution of this work is providing a baseline feasibility study to affirm that environmental information can be obtained from a minimalist swarm where the robots are not equipped with sensors or communication. Current robotic exploration approaches have fundamental problems when applied to disaster scenarios because communication is often unreliable, traditional robots experience high failure rates, and sensing is limited. As such, our results can have significant impact on physical implementations of swarms for disaster scenarios. Rather than trying to design more complicated swarms to overcome these challenges, our work begins by assuming that robots have minimal capabilities. Even without sensing and communication, a local observation of the swarm can be used to infer environmental features in a simulated disaster scenario.

    The remainder of this work is organized as follows. Related work is presented in Section II. We then describe the simulation test platform and introduce the neural network used to correlate observed robot distributions with training environments in Section III. Section IV evaluates the performance of the implemented network. We extend the simulations in Section V to explore the robustness of the methodology with respect to variation in environmental features and swarm size. Conclusions and future work are presented in Section VI.

    II. Related Work

    The sophisticated emergent behaviors of cooperative biological systems in nature have inspired many researchers to focus on recreating observed swarm properties in robotic systems. Properties like group consensus [4], [5], task allocation [6]–[8], and localization [9], [10] have applications in standard exploration strategies. These approaches essentially expand multi-robot strategies, so they face fundamental scalability challenges by requiring global communication and/or localization.

    In response, many works have limited the communication range to local communication between robots, as in [11], [12], or [13], but these works still rely on sensory information, which is not robust, especially in disaster scenarios, because they approach the design as a reduction of multi-robot abilities. By contrast, our work establishes what information can be obtained from a minimalist swarm. Additional sensors can then be added to build up to the required level of performance as appropriate.

    Several impressive works have been done fully in the swarm domain, but these strategies incorporate some form of global knowledge as in [14], are computationally expensive like [15] or [16], or imbue the robots with sophisticated sensory abilities to construct individual maps as in [17]. Unlike these works, our approach simulates robots with no sensory information or communication. We are focused on disaster scenarios where these other approaches will fail because sensory information is not robust and computational resources are limited.

    Other key contributions to swarm research have focused on a top-down approach: the application is defined so individual robot control strategies are designed to reach the goal using optimization methods (e.g., [18]–[21]). These approaches do produce the desired behaviors but therefore require first knowing what behavior is desired. The top-down approach also obscures fundamental relationships between individual robots and the environment. Instead, we take a unique bottom-up approach by exploring what can be done with very simple systems and leveraging the number of interactions, much more like biological systems.

    Our work is simulation-based but the robots are modeled such that they do not exceed the physical capabilities of current mobile robots. There are a variety of commercially available mobile robots, including the Khepera [22] and e-puck [23], which can be used for small robot teams but are prohibitively expensive for swarm research. The constraint of scalability places additional limits on swarm platforms beyond component cost. Currently the kilobot [24] is one promising research platform because the robots can be managed collectively.

    III. Formulation of Test Environment

    A. Pattern Correlation

    Individual robot behaviors, environmental features, and observable robot distributions are correlated so if two of the characteristics are known, the third can be inferred [3]. In this work, the individual robot behaviors are known, and we want to use the local robot distribution to predict global environmental features. Environmental features are inferred using a simple, single-layer neural network that is trained to correlate local observations of robot distribution with the labeled environment in which the robots were simulated. Individual robots have no knowledge of the environment themselves. Instead, a central agent (human or computer with visual data) uses the distribution of robots immediately around them and a trained neural network to predict global environment properties not visible to the central, independent agent.

    A single input neuron serves as a bias term while each additional neuron in the input layer considers the number of robots in a single observation bin at a given time. In order to reduce the impact of the initial robot count on environmental correlation, the raw robot density data is first normalized before being applied to the neural network. The output is the likelihood that the observations came from each of the potential environment classes. Logically, the goal of the neural network is to determine the probability of the training environment, C, being from class k given an observation of the local robot distribution, x. Mathematically, the desired output from the trained neural network for a specific simulation run n can be formulated as

    $$g_k(x_n) = P(C = k \mid x_n) \tag{1}$$

    where the value of g_k(x_n) represents the probability that observation n came from environment class k.

    It is further desired to have the conditional probability output be a function of tunable weights, w, that can be trained to maximize the likelihood of the data distinguishing between all K potential environment classes. Introducing a function f(x_n, w_k) and defining

    $$g_k(x_n) = \frac{\exp\left(f(x_n, w_k)\right)}{\sum_{j=1}^{K}\exp\left(f(x_n, w_j)\right)} \tag{2}$$

    ensures that the probability of any given environment class, k, is between 0 and 1, and the probability across all K potential classes sums to 1 independent of the choice of weights. We then choose

    $$f(x_n, w_k) = w_k^{T} x_n \tag{3}$$

    to create a general softmax formulation [25]. To further understand our choice, we first define an indicator variable, t_{n,k}, for each robot density observation, n, and each potential environment, k, to identify from which environment class observation n came. The environment classes are labelled with an integer value from 1 to K. The indicator variable for observation n is therefore defined as

    $$t_{n,k} = \begin{cases} 1, & \text{if observation } n \text{ came from class } k \\ 0, & \text{otherwise} \end{cases} \tag{4}$$

    We form a measure of how distinguishable the classes are from each other by considering the product of the probabilities for all N environment observations, hereby called the data likelihood. Using the class indicator variables for a K-class scenario, the data likelihood, L(w), is expressed as

    $$L(w) = \prod_{n=1}^{N}\prod_{k=1}^{K} g_k(x_n)^{t_{n,k}} \tag{5}$$

    Maximizing the likelihood with respect to the tunable weights increases the network's ability to classify an observation sample. Finding the weight values that maximize the likelihood requires first finding the gradient of L in (5), but for computational efficiency we instead maximize the natural logarithm of the likelihood

    $$\ln L(w) = \sum_{n=1}^{N}\sum_{k=1}^{K} t_{n,k}\,\ln g_k(x_n) \tag{6}$$

    Substituting (2) back into (6) and taking the gradient with respect to the weights of a single output neuron, j, we get

    $$\nabla_{w_j}\ln L(w) = \sum_{n=1}^{N}\left(t_{n,j} - g_j(x_n)\right)x_n \tag{7}$$

    Equation (7) is still nonlinear with respect to x, so an iterative update process is needed to find which weights maximize the conditional probability in (2). This leads to a gradient ascent update rule for the weights of the form

    $$w_j \leftarrow w_j + \alpha\sum_{n=1}^{N}\left(t_{n,j} - g_j(x_n)\right)x_n \tag{8}$$

    where α is the learning rate. Throughout this work, a constant learning rate of α = 0.0001 was used and the weights were updated over 500 iterations, as further iterations did not greatly change the final weight values. These values are not optimized, as the focus of this work is not the neural network but rather demonstrating the feasibility of using local observations of robot distributions to infer more global environmental features.
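    The procedure in (1)–(8) amounts to multinomial logistic regression trained by gradient ascent. A minimal sketch follows (illustrative Python rather than the authors' MATLAB; the function names and toy hyperparameters are our own assumptions):

```python
import math
import random

def softmax(z):
    # Numerically stable softmax, implementing (2) with f(x, w_k) = w_k^T x.
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def train(X, T, K, alpha=0.0001, iters=500):
    # X: rows of normalized bin densities, each with a leading 1.0 bias input.
    # T: one-hot rows t_{n,k} identifying each row's training environment class.
    d = len(X[0])
    # Weights initialized uniformly on (0, 1), as in the 1D experiments.
    w = [[random.uniform(0, 1) for _ in range(d)] for _ in range(K)]
    for _ in range(iters):
        grad = [[0.0] * d for _ in range(K)]
        for x, t in zip(X, T):
            g = softmax([sum(wj[i] * x[i] for i in range(d)) for wj in w])
            for j in range(K):
                for i in range(d):
                    grad[j][i] += (t[j] - g[j]) * x[i]  # gradient (7)
        for j in range(K):
            for i in range(d):
                w[j][i] += alpha * grad[j][i]  # ascent update (8)
    return w
```

    Classifying a new observation is then a matter of taking the argmax of the softmax outputs across the K classes.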

    B. Simulated Robot Swarm

    The focus of this work is establishing what features can be determined from a minimalist swarm for applications in disaster scenarios where resources are limited. A specific example use case for this work is locating viable exits in the case of a partial building collapse. To model this hypothetical scenario, we used MATLAB to simulate the robot distribution in one-dimensional (1D) and two-dimensional (2D) environments to model hallways and office rooms. Each simulation consisted of a user-defined environment that contained either a single line of N internal bins or a square of N×N internal bins for the 1D hallway and 2D room scenarios, respectively. Boundary bins form the perimeter of each environment and are specified as a sink or a wall to represent an opening or an obstruction in the environment. The use of bins to model the environment with potential obstacles placed at the boundary is inspired by Yamauchi's occupancy grid approach to map generation [26].

    At every iteration, each robot randomly selects a desired bin that is orthogonally adjacent (no diagonal motion) to its current bin using a uniform distribution so that each potential bin is equally likely. If the desired bin is not a boundary, the robot will move to the adjacent bin at the next time step. If the desired bin is a sink boundary, the robot is removed from the simulation because the sink represents an opening in the environment; however, if the desired bin is a wall, the robot instead stays in its current bin for the next time step. The robot density, that is, the number of robots in every bin of the environment, is stored at each iteration.
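    As a concrete illustration of this motion model, the following sketch (Python rather than the MATLAB used in the paper; the 0-indexed bins and function names are our own assumptions) simulates a 1D hallway and records the per-bin density at every iteration:

```python
import random

def simulate_1d(n_robots, n_bins=10, steps=30, left='sink', right='sink', seed=0):
    # Internal bins are 0..n_bins-1; boundaries sit just outside at -1 and n_bins.
    # A 'sink' boundary removes a robot that steps into it; a 'wall' blocks the move.
    rng = random.Random(seed)
    mid = n_bins // 2
    # Robots start evenly split between the two central bins.
    robots = [mid - 1] * (n_robots // 2) + [mid] * (n_robots - n_robots // 2)
    history = []
    for _ in range(steps):
        survivors = []
        for pos in robots:
            target = pos + rng.choice([-1, 1])  # uniform random step
            if target < 0:
                if left == 'wall':
                    survivors.append(pos)   # blocked: stay in current bin
                # left sink: robot escapes and is removed
            elif target >= n_bins:
                if right == 'wall':
                    survivors.append(pos)
            else:
                survivors.append(target)
        robots = survivors
        counts = [0] * n_bins
        for pos in robots:
            counts[pos] += 1
        history.append(counts)  # robot density at this iteration
    return history
```

    Setting both boundaries to 'wall' traps every robot in the hallway, while a single 'sink' models a one-sided exit.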

    For this work, we considered three distinct environments, or classes, for the 1D scenario and four classes for the 2D scenario. Each 1D environment approximated a hallway with N = 10 internal bins and a potential exit at the left and right boundaries. The boundary bins were varied such that Class I had a sink boundary at each end so that the hallway was unobstructed, Class II had a wall boundary at each end so that robots could not escape the hallway, and Class III had a sink at the left edge and a wall at the right edge so that robots could only escape the hallway on the left side.

    The 2D environments modeled a square office space consisting of N×N internal bins surrounded by wall boundaries. A set of five consecutive sink bins were placed near the middle of a single wall, spanning bins 5–9, to model an unobstructed doorway. The North Class contained a doorway in the "north" wall, the East Class had the doorway in the "east" wall, and so forth.
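    A 2D analogue of the random-walk model can be sketched as follows (again illustrative Python, not the paper's MATLAB; the 0-indexed door span range(4, 9), corresponding to bins 5–9, and the starting positions around the grid center are our own assumptions):

```python
import random

def simulate_2d(n_robots, n=10, steps=40, door_wall='north', door_bins=range(4, 9), seed=0):
    # n x n internal bins indexed (row, col) from (0, 0); the door is a run of
    # sink bins in one wall, and every other boundary bin is a wall.
    rng = random.Random(seed)
    center = (n // 2, n // 2)
    # Robots start evenly spread over the eight bins surrounding the center.
    starts = [(center[0] + dr, center[1] + dc)
              for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    robots = [starts[i % len(starts)] for i in range(n_robots)]
    for _ in range(steps):
        survivors = []
        for (r, c) in robots:
            dr, dc = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])  # no diagonals
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                survivors.append((nr, nc))
                continue
            # Boundary hit: the robot escapes only through the door, else stays.
            escaped = (
                (nr < 0 and door_wall == 'north' and nc in door_bins) or
                (nr >= n and door_wall == 'south' and nc in door_bins) or
                (nc < 0 and door_wall == 'west' and nr in door_bins) or
                (nc >= n and door_wall == 'east' and nr in door_bins))
            if not escaped:
                survivors.append((r, c))
        robots = survivors
    return robots  # positions of the robots still in the room
```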

    We extend our hallway and office metaphors by evenly distributing the robots in the central-most bins of the 1D and 2D environments and observing the density in these central bins at each time iteration. This approach models a person who is in the middle of an unknown environment, equally far from all potential openings, and deploys a swarm of robots in the area around them. The person then uses the observed density of robots immediately around them to predict the location of a viable exit.

    To evaluate the feasibility of using locally observed robot densities and a relatively generic neural network to predict the environment being explored, 200 simulations were run in MATLAB for each environment class in both the 1D and 2D scenarios. The number of robots in each interior bin of the environment was stored at every iteration to form a full population for each simulation. The simulation time in MATLAB increased significantly as the number of robots increased, but 1D simulations typically required just a few seconds to run while 2D simulations often required several minutes.

    IV. Evaluation of Swarm Methodology

    A. Overview

    Robot densities from the observed bins at the desired time are selected from the full population for the 1D or 2D scenario to form a complete data set for that environment type. Each row in a data set corresponds to a single MATLAB run. Each column in the data set represents the number of robots in one observation bin. Each column is normalized and then used with the neural network to form a correlation between the number of robots in locally observed bins and the environment being explored. In all cases, 70% of the data set was used to train the neural network with each environment type being equally represented. The remaining 30% was reserved for evaluating the classification accuracy of the trained network.
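    The data-set assembly described above can be sketched as follows (illustrative Python; the per-column max-scaling is one plausible reading of "normalized", and all names are our own assumptions):

```python
import random

def make_dataset(runs_per_class, observe, train_frac=0.7, seed=1):
    # runs_per_class: {class_label: [simulation histories]}
    # observe: maps one history to a feature row (observed bin counts)
    rng = random.Random(seed)
    rows, labels = [], []
    for label, runs in runs_per_class.items():
        for h in runs:
            rows.append(observe(h))
            labels.append(label)
    # Column-wise normalization to reduce sensitivity to the initial robot count.
    ncols = len(rows[0])
    for c in range(ncols):
        col_max = max(r[c] for r in rows) or 1
        for r in rows:
            r[c] = r[c] / col_max
    # Shuffle, then hold out the tail for testing (70/30 split by default).
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    cut = int(train_frac * len(idx))
    train_set = [(rows[i], labels[i]) for i in idx[:cut]]
    test_set = [(rows[i], labels[i]) for i in idx[cut:]]
    return train_set, test_set
```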

    B. Performance in 1D Environment

    The 1D hallway model was evaluated first as a benchmark for the feasibility of using local density observations to infer global environmental features like exits. For all 1D simulations, robots moved freely in the ten internal bins, numbered 2–11. Bins 1 and 12 were boundary bins, represented as either a sink or wall, and defined the three potential environment classes. It is assumed a physical implementation would have a person located in the center of the environment deploy a swarm of robots to help predict which of the two potential directions is a viable exit; hence, robots are initially distributed evenly in bins 6 and 7. The person can only observe the number of robots in these same two bins to model the limited visibility likely in a disaster scenario. To model this scenario, the neural network weights were initialized using a uniform distribution on the open interval (0, 1) and then updated using (8) with the observed robot density in bins 6 and 7.

    Initially, the neural network was trained using the robot densities observed in bins 6 and 7 at a single time step while the number of robots placed in the environment was systematically varied from just 10 robots up to 1000 robots. The goal of this preliminary simulation was to determine how many robots and what length of time would ensure a sufficient number of interactions to encode environmental features. Two hundred simulations were run for each of the three environment classes, creating a data set with 600 rows and just two columns, the first for observed densities in bin 6 and the second for bin 7. Fig. 1 summarizes the classification accuracies from this exploration, with the results being averaged over 50 trials at the same robot count and time to reduce the impact of statistical variation.

    Fig. 1. A stronger correlation is achieved between locally observed robot density and the environment when more robots are initially distributed. In addition, the robots need a sufficient amount of time to interact with each other and the environment. For example, 90% of the environments were classified correctly when 500 robots moved about 30 times.

    Adding more robots to the environment generally improved the classification accuracy of the neural network, as expected, because more interactions occur. However, there are diminishing returns in the classification accuracy with respect to increased robot count, suggesting the system can become saturated. Exploration time is also a key factor in evaluating the effectiveness of correlating environmental features to observations of the local robot density. A minimum of 9 moves is required for a robot to reach a potential wall boundary and return to one of the central observation bins, so it is not surprising that the classification accuracy was essentially random, independent of robot count, for a time sample of 10. Observing the central robot density after robots moved 20 times significantly increased the classification accuracy, but it required 30 moves before the classification accuracy was reliably over 90% for the robot counts considered.

    Based on the results in Fig. 1, a robot count of 500 was used for the remaining analysis in 1D as 500 robots appeared to provide a sufficient number of interactions without saturating the environment. A closer look at the classification accuracy of the neural network with 500 robots is shown in Fig. 2(a). The trained network classified the 420 training samples and 180 previously unseen test samples with over 90% accuracy after 30 time steps. In our hypothetical hallway scenario, these results indicate that a person could allow the robots in a swarm to move 30 times and then look at the distribution immediately around them to determine which direction is unobstructed. More realistically, a person would watch the evolution of the robot density around them, which means the neural network should consider the density in bins 6 and 7 at the current time step and all previous time steps. We define this approach as a sequential observation. Using the sequential bin density did decrease the time required to reach a desired classification accuracy, as shown in Fig. 2(b). It required observing the number of robots in bins 6 and 7 for about 25 robot moves before training environments could be classified with 90% accuracy.
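    A sequential observation simply concatenates the observed bin densities from every time step up to the current one into a single, growing input row. A minimal sketch (illustrative Python; the names are our own assumptions):

```python
def sequential_features(history, bins, t):
    # history: per-step lists of bin counts from one simulation run
    # bins: indices of the observed bins (e.g., the two central hallway bins)
    # Returns one input row covering steps 0..t, with a leading bias term.
    row = [1.0]
    for counts in history[:t + 1]:
        row.extend(counts[b] for b in bins)
    return row
```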

    Fig. 2. By observing the number of robots in the center of a simulated 10-bin hallway, a simple neural network could correlate the density with the environment being explored and predict the location of exits with 90% accuracy after 500 robots move just 30 times, whether considering (a) a single time step; or (b) a sequential observation.

    As expected, the initial classification accuracy in both approaches of Fig. 2 was approximately 33%, equivalent to randomly guessing one of the three potential environment classes. As was noted earlier, the 1D scenario requires a minimum of 9 moves for a robot to reach a potential wall boundary and return to one of the central observation bins. This reflection property is affirmed in the results of Fig. 2, where the accuracies begin to steadily climb after time step 9. Fig. 2(b) indicates a slight overtraining of the neural network as the training data was consistently classified with higher accuracy than the testing data. Nonetheless, using a sequential observation of robots in bins 6 and 7, which better models a human observer, did reduce the number of robot moves required to predict the environment class for a desired accuracy, as anticipated.

    C. Performance in 2D Environment

    A similar analysis was performed using the 2D simulations. The 2D environment represents an office-type scenario where a person is attempting to identify which wall contains the single, 5-bin doorway in a square, 10×10-bin room by observing the local distribution of robots. An observation center was placed at bin (7,7), near the middle of the environment, to represent our hypothetical office employee who is equally distant from all four defining boundaries. A person can deploy robots immediately around them, which corresponds to the simulated robots being initially distributed equally in the eight bins surrounding the observation center.

    With four potential environment classes and 200 simulations per class, a data set with 800 rows was created for each scenario. Each generated data set has eight columns, each corresponding to a single observation bin around the observation center. As in the 1D scenario, the classification accuracy of the neural network was averaged across 50 trials to reduce the impact of statistical variation in the results.

    Once again, an initial simulation was performed to determine an appropriate number of robots for the 2D environment. The number of robots distributed in the eight bins surrounding the observation center was systematically increased from just 100 up to 15 000 in increments of 100. Fig. 3 summarizes the results of this exploration. As anticipated, increasing the size of the environment (i.e., from 10 bins in the 1D scenario to 100 bins in the 2D scenario) also increased the number of individual robots and robot moves needed to explore the environment.

    Fig. 3. Increasing the environment size meant more robots were needed to perform the classification and each robot needed more moves to encode features. For a 10×10 environment, it took 10 000 robots and approximately 40 moves before enough interactions had occurred for the neural network to accurately identify 80% of the environments.

    Fig. 4 shows the relative distribution of 10 000 robots after they have moved 40 times in the 2D environment. The environment pictured is a sample from the North Class. Robots can only exit through the north doorway and no robots will re-enter the environment through a doorway, so the adjacent bins have a noticeably lower density of robots. However, the decrease is less distinct in the bins surrounding the observation center at bin (7,7) where the hypothetical office worker is located.

    Fig. 4. A heat map of the robot density for 10 000 robots after 40 time steps in a 2D environment with a north doorway shows a lower density in the northern bins as expected. The difference is less distinct in the bins surrounding the observation center at (7,7).

    If the hypothetical office worker attempted to classify the environment based on the least dense bin in the observation center, they would obtain only a 21% accuracy after 40 robot moves. By contrast, using the density in all 8 bins of the observation center, the neural network classified more than 80% of the environments correctly after approximately 40 time steps, as reconfirmed in Fig. 5(a), despite the subtle variation in robot distribution. Using sequential observations around the observation center improved the classification accuracy slightly. Fig. 5(b) shows the test data still required approximately 40 robot moves to reliably reach over 80% classification accuracy. In our hypothetical office scenario, a person near the center of the environment could therefore predict in which wall a doorway was located with 80% accuracy by observing the number of robots around them over the course of about 40 robot moves.

    Fig. 5(a) shows that for about the first 15 moves, the classification accuracy was equivalent to a random guess, similar to the 1D simulations. This is not surprising as, at a minimum, a robot initially placed in bin (8,8) (the lower right initialization bin) would need 7 moves to reach either the east or south wall and return to the observed area. A robot in the upper left requires a minimum of 9 moves to return after encountering a wall in the north or west. If a robot moves one bin "left" or "right" during its minimal trajectory, the robot may miss a door and return to falsely inflate the number of robots in an observed bin. This is one reason why the classification accuracy increased much more slowly in the 2D scenarios. The increased number of potential bins to explore also increased the time required for classification.

    The same general behavior extends to the sequential observations shown in Fig. 5(b), though there are signs of overtraining as the training data had a consistently higher classification accuracy than the testing data. Even with these results, the local observations of a robot swarm can be correlated to global environmental features. Indeed, Fig. 5(b) shows that the number of robots around a central bin can be used to form an educated prediction about which wall contains a doorway, even with a simple single-layer neural network and minimalist robots.

    Fig. 5. Locating a doorway in the 2D environment using 8 central bin densities required approximately 40 time steps to reach 80% accuracy if using (a) a single time observation; or using (b) observed robot densities at all previous times.

    D. Robustness of the Classification Process

    The robustness of the trained network in Fig. 5(b) at time 40 was explored with respect to variations in the environment and a decrease in robot count. For the first investigation, test data was gathered from simulated environments where the doorway had been shifted with respect to the training environments. The doorway width was maintained at five bins and was incrementally moved from the far left (a shift of –3) to the far right (a shift of +2). A total of 60 simulations were conducted per environment variation to ensure a comparable test data size of 240 samples per doorway location.

    The resulting classification accuracy for each new doorway position is summarized in Table I. As expected, the highest classification accuracies of 97% occurred in environments most similar to the training environment. Shifting the doorway three bins left resulted in the lowest classification accuracy of 77% because this corner is the furthest from the original training doorway. These results affirm one major advantage of the neural network when compared to a PDE approach: the ability to avoid explicitly describing environmental scenarios. Indeed, the trained neural network still predicted the location of a doorway with 77% accuracy, significantly better than random, even when the doorway was shifted to the far side of a wall and only partially overlapped the original doorway position.

    TABLE I Classification Accuracy for Shifted Doorway

    The trained neural network can also account for a large loss of robots. For the next robustness investigation, the number of robots in a test environment was systematically reduced from 10 000 to 1000 to simulate potential robot failures. Sixty simulations were run for each environment class, so once again, 240 data samples were used to evaluate the neural network for each reduced robot count. The classification accuracy was averaged over 10 separate trials to reduce the impact of statistical variation. Fig. 6 shows that the classification accuracy decreased as the number of robots was reduced, as expected, but remains at nearly 94% as accurate as the training scenario when only 5000 robots are present. This means that half of the robots can fail, but the worker in our hypothetical building collapse can still predict the environment and be 94% as accurate as if all the robots were still functional. Further, the neural network classified the environment with over 64% of the original accuracy when only 1000 robots are present. Nine out of ten robots can fail, but the network is still able to predict which wall contains the single doorway with better than random accuracy.

    Fig. 6. A network trained on 10 000 robots can still reasonably identify a 2D environment when the system undergoes large-scale robot failure.

    V. Extending the Neural Network Results

    Using just one-tenth of the original number of robots, the neural network is ruling out certain environment classes based purely on observations of the local robot distribution. Table II summarizes how the neural network classified the different environments for a single run in a confusion matrix. Sixty of the test samples contained a doorway in the north wall (N), and 36 of those samples were correctly classified; however, 9 of the samples were mistakenly classified as having a door in the west (W). Looking at samples which contained a doorway in the west wall, 38 of the 60 samples were correctly classified while 8 were misclassified as having a door in the north and 14 classified as having a door in the south (S). Zero were classified as the "opposite" class where a door was placed in the east (E). Overall, the confusion matrix indicates environments were most rarely confused with their opposite direction, which is encouraging.

    TABLE II Confusion Matrix of the Test Data Classification for the 2D, Sequential Observation Scenario With Just 1000 Robots

    Ruling out the least-likely environment by observing robot distributions makes intuitive sense. When the bottom row of observation bins has a low robot count, it is likely that the door is in the south wall, but robots may still be escaping east or west with regular frequency, so reaching full classification confidence remains difficult. The classification process can be further understood by comparing the relative weight values of the neural network for each environment class. For each class, the highest weight is associated with the corner furthest from the doorway, while the associated row or column also contains generally higher weights. Hence, observing a large number of robots in three of the bins greatly reduces the likelihood that the opposite wall contains a doorway. Determining which specific wall contains the doorway is more challenging because the discriminating weights are much more similar and, as can be seen in Fig. 4, the variation in robot density is less clear.

    Still referencing Fig. 4, the distribution of robots does become more distinct in bins closer to the doorway. This observation led to an update scheme that demonstrates how a person can further leverage observations of the emergent swarm behavior in differing environments to locate viable exits. Specifically, the demonstrated strategy moves the observation center one bin away from the least likely doorway location, analogous to the office worker moving away from the most crowded area.

    The single-layer neural network was again trained using the derived update procedure from (8). Training data came from the scenario shown in Fig. 5(b) with 10 000 robots. New test data were generated with only 1000 robots exploring the same four environment classes, corresponding to a 90% failure rate, to show what information can still be obtained about the environment. Sixty simulations were run for each class to generate 240 test samples, for consistency with previous experiments.

    During the training process, five separate sets of weights were generated. The center set was trained using density data from the eight bins surrounding the original observation center at bin (7,7), where the office worker is initially standing. These trained center weights were then used to perform an initial classification. If the classification results indicated that the doorway was least likely to be located in the south wall, the office worker moved one bin in the opposite direction so that the observation center was now one bin north, at (6,7). A new observation of the robot distribution was then taken and used to predict an updated doorway location using a second set of trained weights. This second set of weights was pre-generated using training data from the eight bins surrounding a north observation center. Similar weights were generated for a potential move east, south, or west.
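The move rule itself reduces to a one-line coordinate update. A minimal sketch, assuming (row, column) grid indexing with rows increasing southward and columns increasing eastward, consistent with the example of stepping from bin (7,7) to (6,7):

```python
OPPOSITE = {"N": "S", "S": "N", "E": "W", "W": "E"}
STEP = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}  # (row, col)

def update_center(center, least_likely_wall):
    """Shift the observation center one bin away from the least-likely doorway."""
    dr, dc = STEP[OPPOSITE[least_likely_wall]]
    return (center[0] + dr, center[1] + dc)

# South wall least likely -> the office worker steps one bin north.
new_center = update_center((7, 7), "S")
```

After each move, classification simply reuses the pre-trained weight set associated with the new observation center.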

    Fig. 7 summarizes how the dynamic observation center significantly improved the classification accuracy even when the swarm experienced a drastic 90% failure rate. The test environments were initially classified randomly with about 25% accuracy, but this improved to 40% after the number of robots in the surrounding bins was observed for 40 robot moves, as shown by the blue line. Moving the observation center one bin opposite the least-likely environment class and reclassifying the environment consistently increased the accuracy, as shown by the red line in Fig. 7. At time 40, the dynamic observation produced a classification accuracy of 51%, and the improvement continued throughout the simulated time.

    Fig. 7. Moving the observation center one bin opposite the least-likely environment increases the classification accuracy.

    In a disaster scenario, it is very likely the terrain will cause some degree of failure in exploratory robots. Our simulation assumed a 90% failure rate, which left just 1000 robots to explore an unknown domain. A person could still observe the local distribution of this swarm for 40 moves to predict in which direction a doorway is located, and they would be about 40% correct. However, if they move once and re-evaluate, their prediction will be 51% correct. Waiting longer improves both results. In short, a person can regularly update their prediction by moving in a more promising direction and re-evaluating the local robot distribution. Fig. 7 shows that moving just once consistently improves the person’s ability to accurately predict where a doorway is located, even after mass failure of the swarm.

    VI. Concluding Remarks

    Our focus in this work was to exploit the correlation between individual robot behaviors, environmental features, and locally observed robot distributions to reliably predict global environmental features. Using simulated robots equipped with minimal sensing and no communication, we found that the local distribution of robots could be used to accurately infer information about the environment being explored. A simple, single-layer neural network was sufficient for correlating observations of the robot density in a central part of the environment with the location of openings in the environment. The approach was robust with respect to variations in the environment as well as large-scale swarm failure. We demonstrated how trapped office workers could use a simple microprocessor and observations of the local swarm distribution around them to navigate toward unobstructed openings in hallways or office rooms, even after 9 out of 10 robots fail.

    This work is a preliminary step in designing swarms of simple, inexpensive robots to explore harsh environments where communication and sensing are unreliable. While there is much to be done to improve the mobility of physical swarms, especially for harsh environments, our work focuses on achieving reliable feature inference given minimal sensing and computational abilities. Our future work will continue building on the general premise of using local observations of emergent swarm behavior to infer environmental features. While our long-term focus is toward increasing the richness of environmental features that can be predicted, we will next focus on implementing a simulated test platform to better quantify the relationship between swarm size, environment size, and identification accuracy for varying swarm behaviors. This platform will also be used to compare the effectiveness of swarms with respect to smaller teams of robots equipped with more sophisticated sensors.
