We present GazeConduits, a calibration-free ad-hoc mobile device setup that enables users to collaboratively interact with tablets, other users, and content in a cross-device setting using gaze and touch input. GazeConduits leverages recently introduced phone capabilities to detect facial features and estimate users' gaze directions. To join a collaborative setting, users place one or more tablets onto a shared table and position their phone in the center, which then tracks the users present as well as their gaze directions to predict which tablets they are looking at. Using GazeConduits, we demonstrate a series of techniques for collaborative interaction across mobile devices for content selection and manipulation. Our evaluation with 20 simultaneous tablets on a table showed that GazeConduits can reliably identify which tablet or which collaborator a user is looking at, enabling a rich set of interaction techniques.
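To make the core mechanism concrete, the sketch below illustrates one way the predicted gaze target could be computed: intersect the estimated gaze ray with the table plane and pick the nearest tablet center. This is a minimal illustration, not the paper's implementation; it assumes gaze is already available as a 3D ray in table-centered coordinates (z = 0 at the table surface), and the function name, units, and nearest-center heuristic are hypothetical.

```python
import math

def gaze_target(head_pos, gaze_dir, tablet_centers):
    """Intersect a gaze ray with the table plane (z = 0) and return
    the index of the nearest tablet center, or None if the ray does
    not point toward the table. Positions are in meters."""
    px, py, pz = head_pos
    dx, dy, dz = gaze_dir
    if dz >= 0:          # gaze parallel to or away from the table
        return None
    t = -pz / dz         # ray parameter where the ray meets z = 0
    gx, gy = px + t * dx, py + t * dy
    return min(
        range(len(tablet_centers)),
        key=lambda i: math.hypot(gx - tablet_centers[i][0],
                                 gy - tablet_centers[i][1]),
    )

# Example: head 40 cm above the table, glancing down and to the right;
# the gaze ray lands at (0.25, 0.05), so tablet 1 is selected.
print(gaze_target((0.0, 0.0, 0.4), (0.5, 0.1, -0.8),
                  [(-0.3, 0.0), (0.25, 0.05), (0.0, 0.4)]))
```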