Research in the Visual Attention Lab centers on the problem of visual search: How do we find what we are looking for? In the medical setting, for example, we study how the properties of the human “search engine” constrain the ability of radiologists to screen for breast or lung cancer. Basic questions of current interest include: (1) How do we search for multiple things at the same time? (2) How do we know when a search is done, whether it is a search for all of your child’s socks or all of your patient’s cancers? (3) How can humans and AI agents (e.g., deep learning systems) interact so that the combination performs better than either the human or the AI alone? (4) Why do people miss clearly visible items, and how can we reduce false negative errors in search tasks?