Are you using those in your price models?

Maybe in the future. One of my coworkers is using Daubechies wavelets in his model; compared with ordinary moving-average smoothing, doing a wavelet transform, cutting off the spectrum, and then transforming back seems to offer better noise reduction with less loss of information. You still need a power-of-two data length, as with fast Fourier transforms.
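A minimal sketch of that transform-threshold-invert scheme, assuming Python with the PyWavelets package; the function name `wavelet_denoise`, the `db4` wavelet, and the 10% "keep" fraction are illustrative choices, not the details of my coworker's actual model:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(x, wavelet="db4", level=4, keep=0.10):
    """Denoise by zeroing small wavelet coefficients.

    Decompose, keep only the largest `keep` fraction of detail
    coefficients (a crude spectral cutoff), then reconstruct.
    x should have power-of-two length for a clean decomposition.
    """
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Pool all detail coefficients to pick a global magnitude threshold.
    details = np.concatenate(coeffs[1:])
    thresh = np.quantile(np.abs(details), 1.0 - keep)
    # Hard-threshold the details; leave the coarse approximation alone.
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="hard")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Toy usage: a noisy sine of length 1024 (a power of two).
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 1024)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)
smooth = wavelet_denoise(noisy)
```

Hard thresholding is just one common variant; soft thresholding (`mode="soft"`) shrinks the surviving coefficients as well.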
Recently, I have been using techniques from "robust statistics": medians, trimmed data, etc. This reminded me of our early experiences with median filtering. Now I think that, while the technique may be nonlinear, the point of experimental science is to make a reliable measurement. If the number itself is what we want to measure (or even ordinary arithmetic transformations thereof), then I say filter. I think the only case where I wouldn't automatically reach for a trimmed mean (where the top and bottom x% of observations are excluded before taking the mean) is if I were doing a transformation to a different space, such as an FFT. But for dynamic pulling experiments, I suspect the FFT is a rarely used technique.
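For example, with SciPy's `stats.trim_mean`; the toy data and the 12.5% cut here are invented for illustration:

```python
import numpy as np
from scipy import stats

# Eight measurements, one of them a wild outlier.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 47.3, 9.7])

plain = data.mean()                    # dragged toward the outlier (~14.6)
trimmed = stats.trim_mean(data, 0.125) # drop top and bottom 12.5% (1 point each)
print(plain, trimmed)                  # trimmed mean stays near 10
```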
You probably know that reporting a standard deviation is meaningful only if the data are close to normal. In the kind of data I have especially, I am plagued by fat-tailed distributions, and I find that ordinary means and variances are too sensitive to outliers. The robust alternative is the MAD (median absolute deviation): the median of the absolute deviations from the data's median.
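A small comparison, again assuming SciPy and reusing the invented data above:

```python
import numpy as np
from scipy import stats

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 47.3, 9.7])

sd = data.std(ddof=1)                          # blown up by the single outlier
mad = stats.median_abs_deviation(data)         # raw MAD: barely moved by it
mad_sigma = stats.median_abs_deviation(data, scale="normal")
print(sd, mad, mad_sigma)
```

The `scale="normal"` option rescales the MAD (by roughly 1.4826) so that it estimates the standard deviation when the data really are Gaussian, which makes the two numbers directly comparable.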