Mirroring without Overimitation: Learning Functionally Equivalent Manipulation Actions


AAAI 2019, Honolulu, Hawaii

Conference Papers Functional Manipulation Physics-based Simulation

Abstract

Coming soon.

MetaStyle: Three-Way Trade-Off Among Speed, Flexibility and Quality in Neural Style Transfer

Chi Zhang, Yixin Zhu, Song-Chun Zhu.


AAAI 2019, Honolulu, Hawaii

Conference Papers Optimization

Abstract

Coming soon.

Cooperative Holistic Scene Understanding: Unifying 3D Object, Layout, and Camera Pose Estimation

Siyuan Huang, Siyuan Qi, Yinxue Xiao, Yixin Zhu, Ying Nian Wu, Song-Chun Zhu.


NIPS 2018, Montréal, Canada

Conference Papers Functional Object and Scene Understanding

Abstract

Coming soon.

Holistic 3D Scene Parsing and Reconstruction from a Single RGB Image

Siyuan Huang, Siyuan Qi, Yixin Zhu, Yinxue Xiao, Yuanlu Xu, Song-Chun Zhu.


ECCV 2018, Munich, Germany

Conference Papers Functional Object and Scene Understanding

Abstract

We propose a computational framework to jointly parse a single RGB image and reconstruct a holistic 3D configuration composed of a set of CAD models using a stochastic grammar model. Specifically, we introduce a Holistic Scene Grammar (HSG) to represent the 3D scene structure, which characterizes a joint distribution over the functional and geometric space of indoor scenes. The proposed HSG captures three essential and often latent dimensions of indoor scenes: i) latent human context, describing the affordance and functionality of a room arrangement; ii) geometric constraints over the scene configurations; and iii) physical constraints that guarantee physically plausible parsing and reconstruction. We solve this joint parsing and reconstruction problem in an analysis-by-synthesis fashion, seeking to minimize the differences between the input image and the rendered images generated by our 3D representation, over the space of depth, surface normals, and object segmentation maps. The optimal configuration, represented by a parse graph, is inferred using Markov chain Monte Carlo (MCMC), which efficiently traverses the non-differentiable solution space, jointly optimizing object localization, 3D layout, and hidden human context. Experimental results demonstrate that the proposed algorithm improves generalization ability and significantly outperforms prior methods on 3D layout estimation, 3D object detection, and holistic scene understanding.
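
As a rough illustration of the analysis-by-synthesis loop described above, here is a minimal Metropolis-Hastings sketch in Python; `propose`, `render`, and `energy` are hypothetical stand-ins for the paper's parse-graph edit moves, renderer, and depth/normal/segmentation loss, and symmetric proposals are assumed:

```python
import math
import random

def parse_scene(image, init_pg, propose, render, energy, steps=10000, T=1.0):
    """Metropolis-Hastings over parse graphs: propose an edit (move an object,
    adjust the layout, toggle human context), re-render, and accept with
    probability min(1, exp(-(E_new - E) / T)). Assumes symmetric proposals."""
    pg = init_pg
    e = energy(image, render(pg))   # loss over depth/normals/segmentation
    for _ in range(steps):
        pg_new = propose(pg)
        e_new = energy(image, render(pg_new))
        if e_new < e or random.random() < math.exp((e - e_new) / T):
            pg, e = pg_new, e_new
    return pg
```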

Configurable 3D Scene Synthesis and 2D Image Rendering with Per-Pixel Ground Truth using Stochastic Grammars


IJCV 2018
Short version published in CVPR 2018

Journal Papers Functional Object and Scene Understanding Physics-based Simulation

Abstract

We propose a systematic learning-based approach to the generation of massive quantities of synthetic 3D scenes and arbitrary numbers of photorealistic 2D images thereof, with associated ground truth information, for the purposes of training, benchmarking, and diagnosing learning-based computer vision and robotics algorithms. In particular, we devise a learning-based pipeline of algorithms capable of automatically generating and rendering a potentially infinite variety of indoor scenes by using a stochastic grammar, represented as an attributed Spatial And-Or Graph, in conjunction with state-of-the-art physics-based rendering. Our pipeline is capable of synthesizing scene layouts with high diversity, and it is configurable inasmuch as it enables the precise customization and control of important attributes of the generated scenes. It renders photorealistic RGB images of the generated scenes while automatically synthesizing detailed, per-pixel ground truth data, including visible surface depth and normal, object identity, and material information (detailed to object parts), as well as environmental attributes (e.g., illumination and camera viewpoints). We demonstrate the value of our synthesized dataset by improving performance in certain machine-learning-based scene understanding tasks (depth and surface normal prediction, semantic segmentation, reconstruction, etc.) and by providing benchmarks for and diagnostics of trained models by modifying object attributes and scene properties in a controllable manner.
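
To make the grammar-sampling idea concrete, here is a minimal sketch of drawing one configuration from a simplified And-Or graph; the dictionary-based node encoding is an assumption for illustration, not the paper's attributed S-AOG:

```python
import random

def sample_aog(node, rng=random):
    """Recursively sample a configuration from a simplified And-Or graph.
    AND nodes expand all children; OR nodes pick one child by branching
    probability; terminal nodes contribute an object label."""
    if node["type"] == "terminal":
        return [node["object"]]
    if node["type"] == "and":
        return [obj for child in node["children"] for obj in sample_aog(child, rng)]
    if node["type"] == "or":
        children, probs = zip(*node["children"])   # [(child, prob), ...]
        return sample_aog(rng.choices(children, weights=probs, k=1)[0], rng)
    raise ValueError(node["type"])

# e.g. sample_aog({"type": "or", "children": [
#     ({"type": "terminal", "object": "desk"}, 0.7),
#     ({"type": "terminal", "object": "table"}, 0.3)]})
```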

Human Causal Transfer: Challenges for Deep Reinforcement Learning



CogSci 2018, Madison, Wisconsin

Conference Papers (Oral) Causal Reasoning

Abstract

Discovery and application of causal knowledge in novel problem contexts is a prime example of human intelligence. As new information is obtained from the environment during interactions, people develop and refine causal schemas to establish a parsimonious explanation of underlying problem constraints. The aim of the current study is to systematically examine the human ability to discover causal schemas by exploring the environment and transferring knowledge to new situations with greater or different structural complexity. We developed a novel OpenLock task, in which participants explored a virtual "escape room" environment by moving levers that served as "locks" to open a door. In each situation, the sequential movements of the levers that opened the door formed a branching causal sequence that began with either a common-cause (CC) or a common-effect (CE) structure. Participants in a baseline condition completed five trials with high structural complexity (i.e., four active levers). Those in the transfer conditions completed six training trials with low structural complexity (i.e., three active levers) before completing a high-complexity transfer trial. The causal schema acquired during training was either congruent or incongruent with the schema required in the transfer trial. Baseline performance under the CC schema was superior to performance under the CE schema, and schema congruency facilitated transfer performance when the congruent schema was the less difficult CC schema. We compared human performance to a deep reinforcement learning model and found that a standard double deep Q-network (DDQN) is unable to capture the causal abstraction between trials with the same causal schema or across a transfer of causal schema.
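
For readers unfamiliar with the DDQN baseline mentioned above, the following is a standard Double DQN loss in PyTorch, a generic sketch rather than the paper's exact training setup; `online` and `target` are the usual pair of Q-networks, and the batch tensors are assumed to be pre-formed:

```python
import torch

def ddqn_loss(online, target, batch, gamma=0.99):
    """Double DQN: the online network selects the argmax action, the target
    network evaluates it -- the decoupling that DDQN adds over vanilla DQN."""
    s, a, r, s2, done = batch  # states, actions (long), rewards, next states, done flags (float)
    with torch.no_grad():
        a_star = online(s2).argmax(dim=1, keepdim=True)   # action selection
        q_next = target(s2).gather(1, a_star).squeeze(1)  # action evaluation
        y = r + gamma * (1.0 - done) * q_next
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    return torch.nn.functional.mse_loss(q, y)
```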

A Moving Least Squares Material Point Method with Displacement Discontinuity and Two-Way Rigid Body Coupling


SIGGRAPH 2018, Vancouver, Canada

Journal Papers Conference Papers (Oral) Physics-based Simulation

Abstract

In this paper, we introduce the Moving Least Squares Material Point Method (MLS-MPM). MLS-MPM naturally leads to the formulation of Affine Particle-In-Cell (APIC) and Polynomial Particle-In-Cell in a way that is consistent with a Galerkin-style weak-form discretization of the governing equations. Additionally, it enables a new stress divergence discretization that effortlessly allows all MPM simulations to run two times faster than before. We also develop a Compatible Particle-In-Cell (CPIC) algorithm on top of MLS-MPM. Utilizing a colored distance field representation and a novel compatibility condition for particles and grid nodes, our framework enables the simulation of various new phenomena that were not previously supported by MPM, including material cutting, dynamic open boundaries, and two-way coupling with rigid bodies. MLS-MPM with CPIC is easy to implement and friendly to performance optimization.
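
To convey the flavor of the method (though not the paper's CPIC coupling or its optimized implementation), below is a toy 2D MLS-MPM substep in NumPy with quadratic B-spline transfers and a neo-Hookean material; all constants are illustrative assumptions, and there are no bounds checks:

```python
import numpy as np

n_grid, dx = 64, 1.0 / 64
inv_dx, dt = 64.0, 1e-4
p_vol = (dx * 0.5) ** 2
p_mass = p_vol * 1.0                         # unit density
E, nu = 1e3, 0.2
mu = E / (2 * (1 + nu))
lam = E * nu / ((1 + nu) * (1 - 2 * nu))

rng = np.random.default_rng(0)
x = rng.uniform(0.3, 0.5, size=(256, 2))     # particle positions in [0, 1]^2
v = np.zeros_like(x)                         # particle velocities
F = np.tile(np.eye(2), (len(x), 1, 1))       # deformation gradients
C = np.zeros((len(x), 2, 2))                 # APIC/MLS affine velocity matrices

def quad_weights(fx):
    # Quadratic B-spline weights over the 3x3 grid neighborhood.
    return [0.5 * (1.5 - fx) ** 2, 0.75 - (fx - 1.0) ** 2, 0.5 * (fx - 0.5) ** 2]

def substep():
    grid_v = np.zeros((n_grid, n_grid, 2))   # momentum, then velocity
    grid_m = np.zeros((n_grid, n_grid))
    for p in range(len(x)):                  # P2G with the fused MLS stress term
        base = (x[p] * inv_dx - 0.5).astype(int)
        fx = x[p] * inv_dx - base
        w = quad_weights(fx)
        F[p] = (np.eye(2) + dt * C[p]) @ F[p]
        J, FinvT = np.linalg.det(F[p]), np.linalg.inv(F[p]).T
        P = mu * (F[p] - FinvT) + lam * np.log(J) * FinvT   # neo-Hookean PK1
        affine = -dt * p_vol * 4 * inv_dx * inv_dx * (P @ F[p].T) + p_mass * C[p]
        for i in range(3):
            for j in range(3):
                dpos = (np.array([i, j]) - fx) * dx
                wt = w[i][0] * w[j][1]
                grid_v[base[0] + i, base[1] + j] += wt * (p_mass * v[p] + affine @ dpos)
                grid_m[base[0] + i, base[1] + j] += wt * p_mass
    mask = grid_m > 0                        # grid update: gravity + sticky walls
    grid_v[mask] /= grid_m[mask][:, None]
    grid_v[mask] += dt * np.array([0.0, -9.8])
    grid_v[:2] = 0
    grid_v[-2:] = 0
    grid_v[:, :2] = 0
    grid_v[:, -2:] = 0
    for p in range(len(x)):                  # G2P: gather velocity and affine C
        base = (x[p] * inv_dx - 0.5).astype(int)
        fx = x[p] * inv_dx - base
        w = quad_weights(fx)
        v[p] = 0.0
        C[p] = 0.0
        for i in range(3):
            for j in range(3):
                g_v = grid_v[base[0] + i, base[1] + j]
                wt = w[i][0] * w[j][1]
                v[p] += wt * g_v
                C[p] += 4 * inv_dx * wt * np.outer(g_v, np.array([i, j]) - fx)
        x[p] += dt * v[p]
```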

Human-centric Indoor Scene Synthesis Using Stochastic Grammar


CVPR 2018, Salt Lake City, Utah

Conference Papers Functional Object and Scene Understanding

Abstract

We present a human-centric method to sample and synthesize 3D room layouts and 2D images thereof, for the purpose of obtaining large-scale 2D/3D image data with perfect per-pixel ground truth. An attributed spatial And-Or graph (S-AOG) is proposed to represent indoor scenes. The S-AOG is a probabilistic grammar model in which the terminal nodes are object entities, including the room, furniture, and supported objects. Human contexts, as contextual relations, are encoded by Markov Random Fields (MRF) over the terminal nodes. We learn the distributions from an indoor scene dataset and sample new layouts using Markov chain Monte Carlo. Experiments demonstrate that the proposed method can robustly sample a large variety of realistic room layouts based on three criteria: (i) visual realism compared to a state-of-the-art room arrangement method, (ii) accuracy of the affordance maps with respect to ground truth, and (iii) the functionality and naturalness of the synthesized rooms as evaluated by human subjects.
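
As a hedged sketch of how a layout might be scored and locally resampled, the toy code below pairs a simplified MRF energy over terminal objects with a single Metropolis move; `unary` and `pairwise` are hypothetical potentials standing in for the learned distributions, not the paper's actual model:

```python
import numpy as np

def layout_energy(objects, unary, pairwise):
    """Energy of a room layout under a simplified MRF on terminal nodes:
    unary terms score each object's own pose; pairwise terms score contextual
    relations (e.g., chair-faces-desk, nightstand-near-bed)."""
    e = sum(unary(o) for o in objects)
    e += sum(pairwise(a, b) for i, a in enumerate(objects) for b in objects[i + 1:])
    return e

def mh_move(objects, energy, sigma=0.1, rng=np.random.default_rng()):
    """One Metropolis step: jitter a random object's 2D position ('pos' is an
    assumed key holding a NumPy array) and accept with prob min(1, exp(-dE))."""
    i = rng.integers(len(objects))
    cand = [dict(o) for o in objects]
    cand[i]["pos"] = cand[i]["pos"] + rng.normal(0, sigma, size=2)
    d_e = energy(cand) - energy(objects)
    return cand if d_e < 0 or rng.random() < np.exp(-d_e) else objects
```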

Interactive Robot Knowledge Patching using Augmented Reality

Hangxin Liu*, Yaofang Zhang*, Wenwen Si, Xu Xie, Yixin Zhu, Song-Chun Zhu.
* equal contributors

ICRA 2018, Brisbane, Australia

Conference Papers Functional Manipulation

Abstract

We present a novel Augmented Reality (AR) approach, through Microsoft HoloLens, to address the challenging problems of diagnosing, teaching, and patching the interpretable knowledge of a robot. A Temporal And-Or graph (T-AOG) for opening bottles is learned from human demonstration and programmed into the robot. This representation yields a hierarchical structure that captures the compositional nature of the given task and is highly interpretable for users. By visualizing the knowledge structure represented by the T-AOG and the decision-making process of parsing it, the user can intuitively understand what the robot knows, supervise the robot's action planner, and monitor visually latent robot states (e.g., the force exerted during interactions). Given a new task, through such comprehensive visualizations of the robot's inner functioning, users can quickly identify the reasons for failures, interactively teach the robot a new action, and patch it into the knowledge structure represented by the T-AOG. In this way, the robot is capable of solving similar but new tasks through only minor modifications provided interactively by the users. This process demonstrates the interpretability of our knowledge representation and the effectiveness of the AR interface.

Unsupervised Learning using Hierarchical Models for Hand-Object Interactions



ICRA 2018, Brisbane, Australia

Conference Papers Functional Manipulation

Abstract

Contact forces of the hand are visually unobservable but play a crucial role in understanding hand-object interactions. In this paper, we propose an unsupervised learning approach for manipulation event segmentation and parsing. The proposed framework incorporates hand pose kinematics and contact forces using a low-cost, easy-to-replicate tactile glove. We use a temporal grammar model to capture the hierarchical structure of events, integrating extracted force vectors from the raw sensory input of poses and forces. The temporal grammar is represented as a temporal And-Or graph (T-AOG), which can be induced in an unsupervised manner. We obtain the event labeling sequences by measuring the similarity between segments using the Dynamic Time Alignment Kernel (DTAK). Experimental results show that our method achieves high accuracy in manipulation event segmentation, recognition, and parsing by utilizing both pose and force data.
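
For reference, here is a compact implementation of the Dynamic Time Alignment Kernel in one common formulation (Gaussian local kernel, length-normalized); details such as the step weights may differ from the paper's exact variant:

```python
import numpy as np

def dtak(X, Y, gamma=1.0):
    """Dynamic Time Alignment Kernel between two multivariate sequences
    (rows = time steps): a DTW-style recursion over a local Gaussian kernel,
    normalized by the total sequence length."""
    k = np.exp(-gamma * ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1))
    n, m = k.shape
    G = np.full((n + 1, m + 1), -np.inf)
    G[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i, j] = max(G[i - 1, j] + k[i - 1, j - 1],
                          G[i - 1, j - 1] + 2 * k[i - 1, j - 1],
                          G[i, j - 1] + k[i - 1, j - 1])
    return G[n, m] / (n + m)
```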

Spatially Perturbed Collision Sounds Attenuate Perceived Causality in 3D Launching Events

Duotun Wang*, James Kubricht*, Yixin Zhu*, Wei Liang†, Song-Chun Zhu, Chenfanfu Jiang, Hongjing Lu.
* equal contributors | † Corresponding Author


IEEE VR 2018, Reutlingen, Germany

Conference Papers (Oral) Causal Reasoning

Abstract

When a moving object collides with an object at rest, people immediately perceive a causal event, i.e., that the first object has launched the second object forward. However, when the second object's motion is delayed, causal impressions attenuate; when the impact is accompanied by a collision sound, they strengthen. Despite a rich literature on causal perception, researchers have exclusively utilized 2D visual displays to examine the launching effect. It remains unclear whether people are equally sensitive to the spatiotemporal properties of observed collisions in the real world. The present study first examined whether previous findings in causal perception with audiovisual inputs can be extended to immersive 3D virtual environments. We then investigated whether perceived causality is influenced by variations in the spatial position of an auditory collision indicator. We found that people are able to localize sound positions based on auditory inputs in VR environments, and that spatial discrepancy between the estimated position of the collision sound and the visually observed impact location attenuates perceived causality.

Tracking Occluded Objects and Recovering Incomplete Trajectories by Reasoning about Containment Relations and Human Actions

Wei Liang, Yixin Zhu, Song-Chun Zhu.


AAAI 2018, New Orleans, Louisiana

Conference Papers (Spotlight) Functional Object and Scene Understanding

Abstract

This paper studies the challenging problem of tracking severely occluded objects in long video sequences. The proposed method reasons about containment relations and human actions, thus inferring and recovering the identities of occluded objects while they are contained or blocked by others. Two conditions lead to incomplete trajectories: i) Contained. The occlusion is caused by a containment relation formed between two objects; e.g., an unobserved laptop inside a backpack forms a containment relation between the laptop and the backpack. ii) Blocked. The occlusion is caused by other objects blocking the view from certain locations, during which the containment relation does not change. By explicitly distinguishing these two causes of occlusion, the proposed algorithm formulates the tracking problem as a network flow representation encoding containment relations and their changes. By assuming that occlusions do not happen spontaneously but are triggered only by human actions, a MAP inference is applied to jointly interpret the trajectory of an object by detections in space and human actions in time. To quantitatively evaluate our algorithm, we collected a new occluded-object dataset captured by a Kinect sensor, including a set of RGB-D videos and human skeletons with multiple actors, various objects, and different changes of containment relations. In the experiments, we show that the proposed method demonstrates better performance on tracking occluded objects compared with baseline methods.
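
As an illustration of the network-flow view of tracking-by-detection (without the containment and action terms that are this paper's contribution), here is a minimal min-cost-flow formulation using networkx; `det_score` and `trans_cost` are hypothetical callables, and costs are scaled to integers for the solver:

```python
import networkx as nx

def link_detections(dets, det_score, trans_cost, n_tracks):
    """Tracking-by-detection as min-cost flow: each unit of flow from S to T
    is one trajectory; negative detection weights reward using detections.
    dets: {frame: [detection, ...]}, with frames assumed consecutive."""
    G = nx.DiGraph()
    frames = sorted(dets)
    G.add_node("S", demand=-n_tracks)
    G.add_node("T", demand=n_tracks)
    for t in frames:
        for i, d in enumerate(dets[t]):
            u, w = ("u", t, i), ("v", t, i)
            G.add_edge(u, w, capacity=1, weight=-int(100 * det_score(d)))
            G.add_edge("S", u, capacity=1, weight=0)   # track may start here
            G.add_edge(w, "T", capacity=1, weight=0)   # track may end here
    for t, t2 in zip(frames, frames[1:]):
        for i, d in enumerate(dets[t]):
            for j, d2 in enumerate(dets[t2]):
                G.add_edge(("v", t, i), ("u", t2, j),
                           capacity=1, weight=int(100 * trans_cost(d, d2)))
    return nx.min_cost_flow(G)   # trajectories = unit-flow paths S -> ... -> T
```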

Feeling the Force: Integrating Force and Pose for Fluent Discovery through Imitation Learning to Open Medicine Bottles


IROS 2017, Vancouver, Canada
Short version published in CoRL 2017

Conference Papers (Oral) Functional Manipulation

Abstract

Learning complex robot manipulation policies for real-world objects is challenging, often requiring significant tuning within controlled environments. In this paper, we learn a manipulation model to execute tasks with multiple stages and variable structure, which most robot manipulation approaches do not handle well. The model is learned from human demonstration using a tactile glove that measures both hand pose and contact forces. The tactile glove enables observation of visually latent changes in the scene, specifically the forces imposed to unlock the child-safety mechanisms of medicine bottles. From these observations, we learn an action planner through both a top-down stochastic grammar model (an And-Or graph) representing the compositional nature of the task sequence and a bottom-up discriminative model from the observed poses and forces. These two terms are combined during planning to select the next optimal action. We present a method for transferring this human-specific knowledge onto a robot platform and demonstrate that the robot can perform successful manipulations of unseen objects with similar task structure.
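
A minimal sketch of combining the two terms during planning, assuming hypothetical `grammar_prob` and `discr_prob` scoring functions; the paper's actual formulation may weight and normalize these terms differently:

```python
import math

def next_action(history, candidates, grammar_prob, discr_prob, w=0.5):
    """Select the next action by combining a top-down grammar term (how
    probable the extended action sequence is under the grammar) with a
    bottom-up discriminative term (how probable the action is given the
    current pose/force observations), as a log-linear score."""
    def score(a):
        return (w * math.log(grammar_prob(history + [a]))
                + (1 - w) * math.log(discr_prob(a)))
    return max(candidates, key=score)
```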

A Glove-based System for Studying Hand-Object Manipulation via Joint Pose and Force Sensing


IROS 2017, Vancouver, Canada

Conference Papers (Oral) Functional Manipulation

Abstract

We present the design of an easy-to-replicate glove-based system that can reliably perform simultaneous hand pose and force sensing in real time, for the purpose of collecting human hand data during fine manipulative actions. The design consists of a sensory glove capable of jointly collecting data on finger poses, hand poses, and forces on the palm and each phalanx. Specifically, the sensory glove employs a network of 15 IMUs to measure the rotations between individual phalanxes. Hand pose is then reconstructed using forward kinematics. Contact forces on the palm and each phalanx are measured by 6 customized force sensors made from Velostat, a piezoresistive material whose force-voltage relation we investigate. We further develop an open-source software pipeline, consisting of drivers, processing code, and a system for visualizing hand actions, that is compatible with the popular Raspberry Pi architecture. In our experiments, we conduct a series of evaluations that quantitatively characterize both the individual sensors and the overall system, demonstrating the effectiveness of the proposed design.
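
To illustrate the forward-kinematics step, here is a minimal sketch that chains per-phalanx rotations (as an IMU network might report them) into joint positions; the segment lengths and local-axis convention are assumptions for illustration:

```python
import numpy as np

def phalanx_positions(rotations, lengths, base=np.zeros(3)):
    """Forward kinematics down one finger: each IMU yields the rotation of a
    phalanx relative to its parent; compose the rotations and accumulate each
    segment's length along its local axis to recover joint positions."""
    R = np.eye(3)
    p = base.astype(float)
    out = [p.copy()]
    for R_local, L in zip(rotations, lengths):   # proximal -> distal
        R = R @ R_local                          # chain the relative rotation
        p = p + R @ np.array([L, 0.0, 0.0])      # segment along its local x-axis
        out.append(p.copy())
    return np.stack(out)
```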

Consistent Probabilistic Simulation Underlying Human Judgment in Substance Dynamics


CogSci 2017, London, UK

Conference Papers (Oral) Intuitive Physics Physics-based Simulation

Abstract

A growing body of evidence supports the hypothesis that humans infer future states of perceived physical situations by propagating noisy representations forward in time using rational (approximate) physics. In the present study, we examine whether humans are able to predict (1) the resting geometry of sand pouring from a funnel and (2) the dynamics of three substances---liquid, sand, and rigid balls---flowing past obstacles into two basins. Participants' judgments in each experiment are consistent with simulation results from the intuitive substance engine (ISE) model, which employs a Material Point Method (MPM) simulator with noisy inputs. The ISE outperforms ground-truth physical models in each situation, as well as two data-driven models. The results reported herein expand on previous work proposing human use of mental simulation in physical reasoning and demonstrate human proficiency in predicting the dynamics of sand, a substance that is less common in daily life than liquid or rigid objects.
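
A generic sketch of the noisy-input simulation idea behind the ISE model, with a hypothetical `simulate` callable standing in for the MPM simulator and a simple multiplicative noise assumption on the latent inputs:

```python
import numpy as np

def ise_judgment(simulate, scene_params, n_samples=50, noise=0.05,
                 rng=np.random.default_rng()):
    """Intuitive-substance-engine style estimate: perturb the latent inputs
    (a dict of scalar physical parameters) with perceptual noise, run the
    forward simulator on each sample, and average the outcomes."""
    outcomes = []
    for _ in range(n_samples):
        noisy = {k: v * (1 + rng.normal(0, noise)) for k, v in scene_params.items()}
        outcomes.append(simulate(noisy))
    return np.mean(outcomes, axis=0)
```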

Visuomotor Adaptation and Sensory Recalibration in Reversed Hand Movement Task

Jenny Lin*, Yixin Zhu*, James Kubricht*, Song-Chun Zhu, Hongjing Lu.
* equal contributors

CogSci 2017, London, UK

Conference Papers Mental Simulation

Abstract

Visuomotor adaptation plays an important role in motor planning and execution. However, it remains unclear how sensorimotor transformations are recalibrated when visual and proprioceptive feedback are decoupled. To address this question, the present study asked participants to reach toward targets in a virtual reality (VR) environment. They were given visual feedback of their arm movements in VR that was either consistent with the virtual world (normal motion) or reflected with respect to the left-right and vertical axes (reversed motion). Participants completed two normal motion experimental sessions, with a reversed motion session in between. While reaction time in the reversed motion session was longer than in the normal motion sessions, participants showed learning improvement by completing trials in the second normal motion session faster than in the first. The reduction in reaction time was found to correlate with greater use of linear reaching-trajectory strategies (measured using dynamic time warping) in the reversed and second normal motion sessions. This result appears consistent with linear motor movement planning guided by increased attention to visual feedback. This strategic bias persisted into the second normal motion session. Participants in the reversed session were grouped into two clusters depending on their preference for proximal/distal and awkward/smooth motor movements. We found that participants who preferred distal-smooth movements produced more linear trajectories than those who preferred proximal-awkward movements.

The Martian: Examining Human Physical Judgments Across Virtual Gravity Fields

Tian Ye*, Siyuan Qi*, James Kubricht, Yixin Zhu, Hongjing Lu, Song-Chun Zhu.
* equal contributors

IEEE VR 2017, Los Angeles, California
Accepted to TVCG

Journal Papers Conference Papers (Oral) Intuitive Physics

Abstract

This paper examines how humans adapt to novel physical situations with unknown gravitational acceleration in immersive virtual environments. We designed four virtual reality experiments with different tasks for participants to complete: strike a ball to hit a target, trigger a ball to hit a target, predict the landing location of a projectile, and estimate the flight duration of a projectile. The first two experiments compared human behavior in the virtual environment with real-world performance reported in the literature. The last two experiments aimed to test the human ability to adapt to novel gravity fields by measuring performance in trajectory prediction and time estimation tasks. The experimental results show that: 1) based on a brief observation of a projectile's initial trajectory, humans are accurate at predicting the landing location even under novel gravity fields, and 2) humans' time estimates in a familiar earth environment fluctuate around the ground-truth flight duration, although time estimation in unknown gravity fields indicates a bias toward earth's gravity.
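
As a simple illustration of the trajectory-prediction task, the following sketch fits gravity from a briefly observed trajectory and extrapolates the landing location; this is an idealized observer under standard projectile kinematics, not a model of the participants:

```python
import numpy as np

def predict_landing(t_obs, y_obs, x_obs, ground_y=0.0):
    """Estimate gravity by least-squares fitting y(t) = y0 + vy*t - 0.5*g*t^2
    to the observed samples, then extrapolate to where y reaches the ground.
    Horizontal motion is assumed unaccelerated."""
    A = np.column_stack([np.ones_like(t_obs), t_obs, -0.5 * t_obs ** 2])
    y0, vy, g = np.linalg.lstsq(A, y_obs, rcond=None)[0]
    x0, vx = np.polynomial.polynomial.polyfit(t_obs, x_obs, 1)
    # positive root of y0 + vy*t - 0.5*g*t^2 = ground_y
    t_land = (vy + np.sqrt(vy ** 2 + 2 * g * (y0 - ground_y))) / g
    return x0 + vx * t_land
```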

A Virtual Reality Platform for Dynamic Human-Scene Interaction

Jenny Lin*, Xingwen Guo*, Jingyu Shao*, Chenfanfu Jiang, Yixin Zhu, Song-Chun Zhu.
* equal contributors

SIGGRAPH Asia 2016 Workshop on Virtual Reality meets Physical Reality, Macao, China

Workshop Papers (Oral) Physics-based Simulation

Abstract

Both synthetic static and simulated dynamic 3D scene data are highly useful in the fields of computer vision and robot task planning. Yet their virtual nature makes it difficult for real agents to interact with such data in an intuitive way. Thus, currently available datasets are either static or greatly simplified in terms of interactions and dynamics. In this paper, we propose a system in which virtual reality and human/finger pose tracking are integrated to allow agents to interact with virtual environments in real time. Segmented object and scene data are used to construct a scene within Unreal Engine 4, a physics-based game engine. We then use the Oculus Rift headset with a Kinect sensor, a Leap Motion controller, and a dance pad to navigate and manipulate objects inside synthetic scenes in real time. We demonstrate how our system can be used to construct a multi-jointed agent representation as well as fine-grained finger poses. Finally, we propose how our system can be used for robot task planning and image semantic segmentation.

What is Where: Inferring Containment Relations from Videos


IJCAI 2016, New York City, New York

Conference Papers (Oral) Functional Object and Scene Understanding

Abstract

In this paper, we present a probabilistic approach to explicitly infer containment relations between objects in 3D scenes. Given an input RGB-D video, our algorithm quantizes the perceptual space of a 3D scene by reasoning about containment relations over time. At each frame, we represent the containment relations in space by a containment graph, where each vertex represents an object and each edge represents a containment relation. We assume that human actions are the only cause of containment relation changes over time, and we classify human actions into four types of events: move-in, move-out, no-change, and paranormal-change. Here, paranormal-change refers to events that are physically infeasible and thus are ruled out through reasoning. A dynamic programming algorithm is adopted to find both the optimal sequence of containment relations across the video and the containment relation changes between adjacent frames. We evaluate the proposed method on our dataset of 1326 video clips taken in 9 indoor scenes, including challenging cases such as heavy occlusions and diverse changes of containment relations. The experimental results demonstrate good performance on the dataset.
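
A Viterbi-style sketch of the dynamic programming step described above; `unary` and `transition` are hypothetical scoring functions over candidate containment graphs, with physically infeasible ("paranormal") changes scored as negative infinity:

```python
def best_containment_sequence(candidates, unary, transition):
    """Viterbi DP over per-frame containment-graph hypotheses.
    candidates[t]: list of candidate containment graphs for frame t.
    unary(t, g): how well graph g explains frame t.
    transition(g, h): score of the event (move-in, move-out, no-change)
    between consecutive graphs; -inf for infeasible changes."""
    score = [unary(0, g) for g in candidates[0]]
    back = []
    for t in range(1, len(candidates)):
        new, ptr = [], []
        for h in candidates[t]:
            best = max(range(len(candidates[t - 1])),
                       key=lambda i: score[i] + transition(candidates[t - 1][i], h))
            ptr.append(best)
            new.append(score[best] + transition(candidates[t - 1][best], h) + unary(t, h))
        score, back = new, back + [ptr]
    idx = max(range(len(score)), key=lambda i: score[i])
    path = [idx]                              # backtrack the best path
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return [candidates[t][i] for t, i in enumerate(reversed(path))]
```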

Inferring Forces and Learning Human Utilities From Videos

Yixin Zhu*, Chenfanfu Jiang*, Yibiao Zhao, Demetri Terzopoulos, Song-Chun Zhu.
* equal contributors

CVPR 2016, Las Vegas, Nevada
Short version published in SIGGRAPH Asia 2016 Workshop on Virtual Reality meets Physical Reality, Macao, China

Conference Papers (Oral) Workshop Papers Functional Object and Scene Understanding Physics-based Simulation

Abstract

We propose a notion of affordance that takes into account physical quantities generated when the human body interacts with real-world objects, and we introduce a learning framework that incorporates the concept of human utilities, which in our opinion provides a deeper and finer-grained account not only of object affordance but also of people's interaction with objects. Rather than defining affordance in terms of the geometric compatibility between body poses and 3D objects, we devise algorithms that employ physics-based simulation to infer the relevant forces/pressures acting on body parts. By observing the choices people make in videos (particularly in selecting a chair in which to sit), our system learns the comfort intervals of the forces exerted on body parts while sitting. We account for people's preferences in terms of human utilities, which transcend comfort intervals to also account for meaningful tasks within scenes and spatiotemporal constraints in motion planning, such as for the purposes of robot task planning.

Probabilistic Simulation Predicts Human Performance on Viscous Fluid-Pouring Problem


CogSci 2016, Philadelphia, Pennsylvania
Newer version with additional experiments published in NIPS 2016 Workshop on Intuitive Physics, Barcelona, Spain

Conference Papers (Oral) Workshop Papers Intuitive Physics Physics-based Simulation

Abstract

The physical behavior of moving fluids is highly complex, yet people are able to interact with them in their everyday lives with relative ease. To investigate how humans achieve this remarkable ability, the present study extended the classical water-pouring problem (Schwartz & Black, 1999) to examine how humans take into consideration physical properties of fluids (e.g., viscosity) and perceptual variables (e.g., volume) in a reasoning task. We found that humans do not rely on simple qualitative heuristics to reason about fluid dynamics. Instead, they rely on the perceived viscosity and fluid volume to make quantitative judgments. Computational results from a probabilistic simulation model can account for human sensitivity to hidden attributes, such as viscosity, and their performance on the water-pouring task. In contrast, non-simulation models based on statistical learning fail to fit human performance. The results in the present paper provide converging evidence supporting mental simulation in physical reasoning, in addition to developing a set of experimental conditions that rectify the dissociation between explicit prediction and tacit judgment through the use of mental simulation strategies.

Understanding Tools: Task-Oriented Object Modeling, Learning and Recognition

Yixin Zhu*, Yibiao Zhao*, Song-Chun Zhu.
* equal contributors

CVPR 2015, Boston, Massachusetts
Short version published in ICCV 2015 Workshop on Object Understanding for Interaction, Santiago, Chile

Conference Papers Workshop Papers Functional Object and Scene Understanding

Abstract

In this paper, we present a new framework for task-oriented object modeling, learning, and recognition. The framework includes: i) spatial decomposition of the object and its 3D relations with the imagined human pose; ii) the temporal pose sequence of human actions; and iii) the causal effects (physical quantities on the target object) produced by the object and action.

In this inferred representation, only the object is visible, and all other components are imagined "dark" matter. This framework subsumes other traditional problems, such as: (a) object recognition based on appearance and geometry; (b) action recognition based on poses; and (c) object manipulation and affordance in robotics. We argue that objects, especially man-made objects, are designed for various tasks in a broad sense, and therefore it is natural to study them in a task-oriented framework.

Evaluating Human Cognition of Containing Relations with Physical Simulation


CogSci 2015, Pasadena, California

Conference Papers (Oral) Intuitive Physics

Abstract

Containers are ubiquitous in daily life. By container, we mean any physical object that can contain other objects, such as bowls, bottles, baskets, trash cans, and refrigerators. In this paper, we are interested in the following questions: What is a container? Will an object contain another object? How many objects will a container hold? We study these problems by evaluating human cognition of containers and containing relations with physical simulation. In the experiments, we analyze human judgments with respect to the results of physical simulation under different scenarios. We conclude that physical simulation is a good approximation to human cognition of containers and containing relations.

Dr. Android and Mr. Hide: Fine-grained security policies on unmodified Android


Technical Reports of the Computer Science Department, University of Maryland

Technical Report Programming Language

Abstract

Google's Android platform includes a permission model that protects access to sensitive capabilities, such as Internet access, GPS use, and telephony. We have found that Android's current permissions are often overly broad, providing apps with more access than they truly require. This deviation from least privilege increases the threat from vulnerabilities and malware. To address this issue, we present a novel system that can replace existing platform permissions with finer-grained ones. A key property of our approach is that it runs today, on stock Android devices, requiring no platform modifications. Our solution is composed of two parts: Mr. Hide, which runs in a separate process on a device and provides access to sensitive data as a service; and Dr. Android (Dalvik Rewriter for Android), a tool that transforms existing Android apps to access sensitive resources via Mr. Hide rather than directly through the system. Together, Dr. Android and Mr. Hide can completely remove several of an app's existing permissions and replace them with finer-grained ones, leveraging the platform to provide complete mediation for protected resources. We evaluated our ideas on several popular, free Android apps. We found that we can replace many commonly used "dangerous" permissions with finer-grained permissions. Moreover, apps transformed to use these finer-grained permissions run largely as expected, with reasonable performance overhead.