Mobile robots rely on state estimation and mapping to perceive, plan, and navigate in the real world, but they face significant challenges when operating in unstructured environments. Factors such as low lighting, the presence of particulates (e.g., dust or fog), uneven terrain, and other environmental disturbances can severely degrade a robot's ability to localize and map its surroundings, leading to erroneous movements, an inability to complete tasks, and potentially even damage to the robot or its environment. While current state-of-the-art algorithms may work well in controlled settings, they quickly break down in unstructured scenarios due to brittle architectures, strong environmental assumptions, and high computational complexity, limiting their real-world applicability. This research aims to address these limitations by proposing several novel methods for fast, robust, and domain-agnostic geometric perception through innovative algorithmic design grounded in first principles. Specifically, through three novel algorithms that leverage precise depth measurements from LiDAR sensors, we introduce algorithmic innovations that increase localization accuracy, computational speed, and overall operational reliability. The first algorithm is a lightweight LiDAR odometry (LO) solution that enables the use of dense point clouds for fast and accurate localization via an adaptive keyframing approach and data structure recycling. The second proposes a condensed LiDAR-inertial odometry (LIO) architecture with a fast coarse-to-fine method for continuous-time motion correction, providing a technique for parallelizable point-wise deskewing under a constant-jerk and constant-angular-acceleration motion model.
In the third, we present a robust LiDAR SLAM algorithm that prioritizes operational reliability and real-world efficacy by strategically placing proactive safeguards against common failure points in both the front-end and back-end subsystems. The perspectives gained from this dissertation provide better insight into developing a general-purpose perception framework for autonomous mobile robots operating in a diverse set of environments in the wild.
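To make the motion-correction idea concrete, the sketch below illustrates point-wise deskewing under a constant-jerk / constant-angular-acceleration motion model. This is a minimal, hypothetical illustration, not the dissertation's implementation: the function name, its parameters, and the axis-fixed rotation approximation are all assumptions introduced here. Because each point's correction depends only on its own timestamp, the computation is embarrassingly parallel, which is the property the abstract highlights.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def deskew_points(points, t, v0, a0, j0, w0, alpha0):
    """Illustrative point-wise deskewing (hypothetical API).

    points : (N, 3) raw points, each captured at its own timestamp
    t      : (N,)   per-point timestamps relative to scan start
    v0, a0, j0 : linear velocity, acceleration, jerk at scan start
    w0, alpha0 : angular velocity and angular acceleration at scan start
    """
    t = t[:, None]
    # Closed-form translation from integrating constant jerk:
    #   p(t) = v0*t + a0*t^2/2 + j0*t^3/6
    trans = v0 * t + 0.5 * a0 * t**2 + (1.0 / 6.0) * j0 * t**3
    # Rotation vector from integrating constant angular acceleration,
    # assuming the rotation axis is fixed over the short scan period:
    #   theta(t) = w0*t + alpha0*t^2/2
    rotvec = w0 * t + 0.5 * alpha0 * t**2
    # Each point depends only on its own timestamp, so this maps
    # directly onto vectorized or GPU-parallel execution.
    return R.from_rotvec(rotvec).apply(points) + trans
```

With zero motion parameters the points pass through unchanged; with a nonzero velocity, each point is shifted in proportion to its capture time, undoing the distortion accumulated while the LiDAR was moving mid-scan.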