The LMS Method
Written by: Morgan Ellis
One of the biggest challenges in statistics is dealing with non-normal data. Non-normal data is any dataset that doesn’t follow a smooth, symmetrical pattern—instead, it’s irregular, skewed, or otherwise unevenly distributed. And non-normal data is everywhere, particularly when we deal with real-world measurements—like human growth patterns across age and sex.
Thankfully, statisticians today can stand on the shoulders of giants by using techniques that have reshaped statistical analysis. And in a world with no shortage of non-normal data, one method has become a mainstay: LMS.
LMS parameters
LMS, which stands for lambda (λ), mu (μ), sigma (σ), is a clever way of reshaping messy data into something more manageable for statisticians to work with. The LMS method is based on the idea that many skewed distributions can be transformed to approximate a normal distribution using three key parameters: lambda, mu, and sigma.
Lambda represents the data's skewness: it measures the degree of asymmetry in the distribution. Mu represents the median of the distribution: it measures the data's central tendency. And sigma represents the coefficient of variation: it measures the spread of the data relative to the median. The LMS method ultimately assumes that, after applying the lambda (Box-Cox power) transformation and scaling by mu and sigma, the data can be re-expressed as z-scores that follow a standard normal distribution.
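To make the three parameters concrete, here is a minimal sketch of the LMS z-score transformation in Python. The formula is the standard one for the method; the parameter values in the example are purely hypothetical and do not come from any published reference chart.

```python
import math

def lms_z_score(x, lam, mu, sigma):
    """Convert a measurement x to a z-score given LMS parameters.

    Uses the Box-Cox form z = ((x/mu)**lam - 1) / (lam * sigma)
    when lam != 0, and the log-limit form z = ln(x/mu) / sigma
    when lam == 0 (the power transform's limiting case).
    """
    if lam == 0:
        return math.log(x / mu) / sigma
    return ((x / mu) ** lam - 1) / (lam * sigma)

# Hypothetical BMI-like parameters, for illustration only:
z = lms_z_score(x=18.5, lam=-1.6, mu=16.0, sigma=0.11)
```

Note that a measurement equal to mu always maps to a z-score of zero, which is exactly the "mu is the median" property described above.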
With this approach, the LMS method can take even the most irregular datasets and transform them into forms that behave much more like normal distributions. The LMS transformation is especially powerful because it makes calculating standard deviations and percentiles relatively trivial once all of the heavy lifting has been done. LMS is a brilliant technique that bridges the gap between the unpredictable world and the interpretable world.
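The "relatively trivial" percentile calculation mentioned above amounts to inverting the transformation: pick a standard normal quantile z for the desired percentile, then map it back to the measurement scale. A short sketch, again with hypothetical parameter values:

```python
import math
from statistics import NormalDist

def lms_percentile(p, lam, mu, sigma):
    """Measurement value at the p-th percentile (0 < p < 100),
    given LMS parameters, by inverting the Box-Cox transform:
    x = mu * (1 + lam*sigma*z)**(1/lam), or mu*exp(sigma*z) if lam == 0.
    """
    z = NormalDist().inv_cdf(p / 100)  # standard normal quantile
    if lam == 0:
        return mu * math.exp(sigma * z)
    return mu * (1 + lam * sigma * z) ** (1 / lam)

# The 50th percentile recovers the median mu by construction:
median = lms_percentile(50, lam=-1.6, mu=16.0, sigma=0.11)
```

In practice the heavy lifting is estimating lambda, mu, and sigma as smooth functions of age; once a reference table of those parameters exists, every percentile curve falls out of this one-line inversion.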
Origin of LMS
The LMS method was developed by Tim Cole, Professor of Medical Statistics at University College London, in the late 1980s. His work on childhood growth and obesity brought the method to prominence, most notably a 2000 study in the British Medical Journal that used it to establish international cut-offs for childhood overweight and obesity. That study has since become a seminal work in the field.
At the time, BMI (body-mass index) was widely accepted for assessing childhood growth, but Cole contended that BMI needed to be standardized; he was particularly concerned by the lack of consensus on how to adjust for age. It was to solve this problem that he developed the LMS method, a method that would revolutionize both the analytical side of healthcare and the world of statistics. The LMS method would forever change the way professional statisticians and healthcare practitioners look at real-world measurements—all thanks to Professor Cole.
LMS drawbacks
LMS, however, does come with its disadvantages—there are no free lunches in statistics, either.
Most glaringly, it's complex. Estimating lambda, mu, and sigma as smooth functions involves advanced statistical modeling, which requires specialized software and technical expertise. This can steepen the learning curve for a project, making the method less accessible for routine or small-scale analyses. Fitting these smooth curves can also be computationally demanding, requiring a meaningful investment of both time and expertise.
Another disadvantage of the LMS method is its assumption of normality. LMS relies on the premise that, after transformation, the data follows a normal distribution. However, this assumption may break down in highly skewed datasets and introduce non-trivial bias into the analysis. The method is also sensitive to outliers because of its reliance on lambda: even a relatively small number of extreme values can disproportionately affect its estimation and distort the entire transformation.
Finally, LMS can be hard to interpret. While it often delivers accurate results, the transformations it applies—especially those involving lambda—can make the parameters less intuitive. This lack of clarity can make it harder to explain the findings to non-technical audiences or to compare them with results from simpler methods.
Adoption of LMS
Yet despite these challenges, LMS has become an indispensable tool. Beyond its role in analyzing childhood obesity and powering height calculators like Tall or nah, it is widely used in fields where precise measurement and comparison are critical. It is a mainstay in growth and anthropometric studies, where researchers and health professionals use it to create reference charts for height, weight, and other body measurements across age and sex. It also sees use in epidemiology, public health, and medical research, where standardized percentiles are essential for screening, diagnosis, and tracking trends over time.
That said, LMS isn’t a universal tool. It’s generally avoided in situations where data is sparse, heavily multimodal, or when a simpler approach is sufficient. Because it assumes that the transformed data will approximate normality, it’s less suitable for datasets with extreme irregularities or small sample sizes that make robust estimation of lambda, mu, and sigma impractical. In those cases, non-parametric or alternative modeling methods are often a better fit.
In summary, the LMS method offers a powerful solution for making sense of complex, non-normal data—a challenge statisticians and researchers face across many fields. By transforming irregular datasets into a more manageable form, LMS enables clearer interpretation, meaningful comparisons, and practical applications in healthcare, epidemiology, and beyond. Its influence extends even to everyday tools, like Tall or nah. While the method comes with certain complexities and limitations, its impact on growth measurement and public health research remains undeniable. Thanks to the work of Tim Cole and others, LMS continues to bridge the gap between messy real-world data and actionable insights.