Saturday, January 24, 2015

Stats Status

Back in high school, when the type of physics that was blowing my mind was deterministic equations of motion, I thought statistics was pretty useless and never studied it.  Why study fuzzy averages when you can know perfectly what will happen, as in the simple physics of a fired cannonball?

The more I learn, the more it becomes clear that you usually don't have all the information you need, or you have too much, or you have the wrong stuff entirely, and statistics seems to be the tool for getting the info you actually want.
--> Say you want to buy an airline ticket to Asia and your wallet could use a deal, but you don't really care to look all that much.  So you check once a week for three weeks to get a sense of the prices, then buy the next time it looks good.  Subconsciously, you're taking limited data points and extrapolating to some idea of a distribution of ticket prices, using past knowledge of how airline tickets seem to go in the weeks leading up to the flight date.  Then you do some statistical reasoning (also subconsciously, most likely) and compare your confidence in your model with how much searching you care to do, and when they meet you're ready to buy.
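That subconscious process can be made explicit. Here's a minimal sketch of the same logic in Python, with made-up fare numbers (the `gauss(1400, 150)` distribution and the half-standard-deviation threshold are assumptions for illustration, not real airfare data): check a few prices, build a rough picture of the distribution, then buy the first fare that dips noticeably below your running average.

```python
import random
import statistics

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical weekly fare checks in USD; real prices would come from a
# search site, not a random number generator.
observed = [random.gauss(1400, 150) for _ in range(3)]

mean = statistics.mean(observed)    # your rough sense of a "typical" fare
stdev = statistics.stdev(observed)  # your rough sense of the spread

def looks_good(price, mean, stdev):
    """Buy when a fare dips noticeably below the running average."""
    return price < mean - 0.5 * stdev

# Keep checking weekly until a fare clears the threshold.
week = len(observed)
while True:
    week += 1
    price = random.gauss(1400, 150)
    if looks_good(price, mean, stdev):
        break  # this week's fare "looks good" -- buy it
```

The 0.5-standard-deviation cutoff is the knob that trades patience against savings: a stricter threshold means more weeks of searching for a better expected price, which is exactly the comparison the paragraph above describes.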
--> Or, say you find a data table in the Chicago Maroon with the time it took for the last 100 physics graduates to get their PhDs from UChicago, but that's way too much information!  You really just want the average, a measure of the spread, and maybe a higher-order moment of the distribution that tells you how symmetric the graduation times are about the average, so you boil down the unmanageable set of 100 data points into two or three numbers that your brain can handle.
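Those two or three numbers are quick to compute. A sketch in Python, using hypothetical graduation times rather than the Maroon's actual table (the skewness formula here is the standard sample third moment, chosen as one reasonable "symmetry" measure):

```python
import statistics

# Hypothetical graduation times in years -- stand-ins for the real 100 rows.
times = [5.5, 6.0, 6.0, 6.5, 7.0, 7.0, 7.5, 8.0, 9.0, 11.0]

mean = statistics.mean(times)    # the center of the distribution
stdev = statistics.stdev(times)  # the spread about that center

# Sample skewness (third standardized moment): positive means a long tail
# of slow finishers to the right of the average.
n = len(times)
skew = sum(((t - mean) / stdev) ** 3 for t in times) * n / ((n - 1) * (n - 2))
```

With this toy data the skewness comes out positive: most students cluster near the average, but one eleven-year straggler drags the tail out, and a single number captures that asymmetry.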

In general, it seems statistics is a way of processing information into manageable terms.  The world is a noisy place where everything interesting has a spread on its values -- nothing is totally exact, nothing is precisely the same every time or everywhere -- and statistics is a tool to make sense of all of these uncertainties or spreads or variability or noise.  This perspective seems neat to me; it's like statistics as a field has been developed in response to how rich the world is.  How boring would things be if everything were known with certainty -- if every plane ticket were $2000, every physicist took 6 years to graduate, every blog post consisted of 500 words, and every split pea soup contained 3000 split peas (I guess meaning 1500 unsplit peas...?  That's assuming peas always split into two parts, but in reality pea splitting also follows a statistical distribution and carries its own hidden complexity!)?  Instead, every day is unique, and while your statistically predictive subconscious brain may have an idea of what to expect when you go to work, there are too many moving parts to predict exactly what will happen, and so every day is an adventure (some days more than others, of course).

In my research with granular materials, hundreds or thousands of particles communicate through local interactions (a fancy-sounding way of saying they can only push on their immediate neighbors, but those neighbors push on their neighbors and so on, so you still get long-range effects and complex "emergent" behavior, where the whole is greater than the sum of its parts).  Knowing each of these interactions is possible with the computer simulations we use, but not at all useful unless you take statistical averages to get some sort of macroscopic (large-scale) measure, such as the global packing fraction (how well the particles fill space) or the stress distribution among particles (how forces propagate from one part of the system to another).  At the same time, though, the system size is small enough that some of those averages are so rough they border on being meaningless, such as when you ask about force chains, which are long sequences of particles that pass on high forces to one another (rather than spreading them out among all their neighbors, as you might suppose would always be the case).  So in my work, while the small scale is fairly simple and intuitive (just shapes pushing on each other), understanding the large-scale behavior (what the thousand particles will do together, whether it's avalanching or crunching together when you step on them or just filling a container) involves trying to boil down, statistically, a bunch of randomized interactions and pull out a useful average-type behavior.  And the more I learn, the more I see how nontrivial that is, and how useful it is to look at it through statistical goggles.
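The packing fraction is the simplest of those macroscopic measures, so here's a toy 2D sketch of it in Python (the disc radii and box size are invented for illustration; a real simulation would track thousands of particles and worry about overlaps and boundary effects):

```python
import math

# Made-up disc radii and container size, in arbitrary units.
radii = [0.5, 0.5, 0.4, 0.6, 0.45]  # five particles standing in for thousands
box_area = 3.0 * 3.0                # a 3 x 3 container

# Packing fraction: total particle area divided by container area.
# Thousands of individual positions and contacts boil down to one number
# describing how well the particles fill space.
particle_area = sum(math.pi * r ** 2 for r in radii)
packing_fraction = particle_area / box_area
```

A single number like this is the payoff of the statistical averaging described above: it says nothing about any one particle, but it characterizes the whole packing at a glance.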