The pose of an articulated machine comprises the position and orientation not only of the machine base (e.g., tracks or wheels) but also of its major articulated components (e.g., stick and bucket). Automatically estimating this pose is a crucial capability for technical innovations aimed at improving both safety and productivity in many construction tasks. To enable this capability for articulated machines, a computer-vision-based observation and analysis platform is designed that uses a network of cameras and markers. To model such a complex system, a theoretical framework termed the camera marker network is proposed. A graph abstraction of the network both systematically manages observations and constraints and efficiently finds the optimal solution. An uncertainty analysis that avoids time-consuming simulation enables optimization of network configurations to reduce estimation uncertainty, yielding several empirical rules for better camera calibration and pose estimation. Extensive uncertainty analyses and field experiments show that this approach achieves centimeter-level bucket depth tracking accuracy from as far as 15 m away with only two ordinary cameras (1.1 megapixels each) and a few markers, providing a flexible and cost-efficient alternative to commercial products that rely on infrastructure-dependent sensors such as GPS. A working prototype has been tested on several active construction sites, confirming the method's effectiveness.
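
The full camera marker network formulation is developed in the body of the paper; as a minimal illustration of the underlying building block, the sketch below recovers a single rigid pose (rotation and translation) from matched marker points using the standard Kabsch/Procrustes closed-form solution. The function name and frames are illustrative, not the paper's API.

```python
import numpy as np

def estimate_rigid_transform(body_pts, cam_pts):
    """Estimate R, t such that cam_pts ≈ body_pts @ R.T + t.

    Standard Kabsch algorithm: SVD of the cross-covariance of the
    centered point sets. body_pts, cam_pts are (N, 3) arrays of
    matched marker coordinates in the body and camera frames.
    """
    cb = body_pts.mean(axis=0)                 # body-frame centroid
    cc = cam_pts.mean(axis=0)                  # camera-frame centroid
    H = (body_pts - cb).T @ (cam_pts - cc)     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cb
    return R, t
```

In the camera marker network, many such marker-to-camera constraints are linked in a graph and optimized jointly rather than solved one pose at a time.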