Pre-trained Language Models (PLMs) have been a fundamental component of natural language processing over the past few years and have proven effective across a wide range of applications. In the clinical field, researchers have created domain-specific PLMs to improve performance on clinical NLP tasks. In this report, we present a comprehensive examination of clinical PLMs. Specifically, we begin with a brief overview of foundational concepts in language modeling, including architectures, data sources, and training methods. We then survey existing clinical PLMs and discuss the models and their downstream tasks in the domain. Finally, we highlight limitations and potential future directions in the field.