Keyframe-based motion synthesis plays a significant role in games and movies. Existing methods for complex motion synthesis often produce foot sliding, which degrades the quality of the generated movements. In this paper, we attribute the sliding issue to the mismatch between the root trajectory and the motion postures. To address this problem, we propose a novel spatial-temporal transformer network conditioned on foot contact information for keyframe-based motion synthesis. Specifically, our model mainly comprises a spatial-temporal transformer encoder and two decoders that learn motion sequence features and predict motion postures and foot contact states. A novel mixed embedding, which consists of keyframes and foot contact constraints, is incorporated into the model so that the network can learn from diversified control knowledge. To generate a root trajectory that matches the motion postures, we design a differentiable root trajectory reconstruction algorithm that constructs the root trajectory from the decoder outputs. Qualitative and quantitative experiments on the public LaFAN1, Dance, and Martial Arts datasets demonstrate the superiority of our method in synthesizing complex motions. Compared with state-of-the-art methods, it satisfactorily alleviates the foot sliding problem.
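The abstract does not specify how the root trajectory is reconstructed from the decoder outputs. Below is a minimal, hypothetical sketch of one standard way such contact-based root integration can work: whichever foot is predicted to be in contact is held fixed in world space, and the root displacement is derived from the change in that foot's root-relative position. All names, shapes, and the 0.5 contact threshold are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def reconstruct_root_trajectory(foot_pos_local, contact_probs):
    """Hypothetical sketch: integrate the root XZ trajectory so that the
    foot with the highest predicted contact probability stays fixed in
    world space (a common remedy for foot sliding).

    foot_pos_local: (T, F, 3) foot positions relative to the root
    contact_probs:  (T, F)    predicted per-foot contact probabilities
    Returns root positions of shape (T, 3), starting at the origin.
    """
    T = foot_pos_local.shape[0]
    root = np.zeros((T, 3))
    for t in range(1, T):
        f = int(np.argmax(contact_probs[t]))  # dominant contact foot
        if contact_probs[t, f] > 0.5:         # assumed contact threshold
            # Keep the contact foot's world position fixed:
            # root[t] + local[t, f] == root[t-1] + local[t-1, f]
            delta = foot_pos_local[t - 1, f] - foot_pos_local[t, f]
        else:
            delta = np.zeros(3)               # flight phase: hold root (simplification)
        root[t] = root[t - 1] + delta
        root[t, 1] = 0.0                      # keep root height on the ground plane (simplification)
    return root
```

Because every step is a differentiable function of the inputs (apart from the hard threshold, which a real implementation could soften), the same computation can be expressed in an autodiff framework and trained end-to-end, which is presumably what "differentiable" refers to here.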