October 2023

Forays

October 16: Bayes' Theorem

I recently learned about a new way of thinking about probability. It started with a question: if one percent of a population has a certain medical condition, what is the predictive value of a test with 80% sensitivity (it returns a positive result 80% of the time for people who actually have the condition) and 90% specificity (it returns a negative result 90% of the time for people who do not have the condition)? Put another way, what is the chance that a random person who tests positive actually has the condition? Intuitively you might guess around 90%, since the test is 90% specific, but the real answer is only about 7.5%! The reason is that the condition's base rate in the population is so low, just 1%. By Bayes' theorem, the probability of having the condition given a positive test equals the probability of testing positive given you have it, times the base rate, divided by the total probability of testing positive: (0.8 × 0.01) / (0.8 × 0.01 + 0.1 × 0.99) ≈ 0.075. Bayesian Inference Article
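To make the arithmetic concrete, here is a minimal sketch in Python; the function name and parameters are my own, just for illustration, and it simply restates Bayes' theorem with the numbers above.

    def positive_predictive_value(prevalence, sensitivity, specificity):
        # Bayes' theorem: P(condition | positive) =
        #   P(positive | condition) * P(condition) / P(positive)
        true_positive = sensitivity * prevalence                 # P(positive and condition)
        false_positive = (1 - specificity) * (1 - prevalence)    # P(positive and no condition)
        return true_positive / (true_positive + false_positive)

    print(positive_predictive_value(0.01, 0.80, 0.90))  # ~0.0748

Plugging in the numbers gives 0.008 / 0.107 ≈ 0.075, matching the figure above.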

October 5: The Knowledge Machine

After finishing a book titled "The Knowledge Machine" by Michael Strevens, I realized something new about why modern science is so successful in contrast to the methods of ancient humans, and even of thinkers up through the Renaissance. Since the time of Aristotle, some have posited the existence of "explanatory relativism", whereby explanations from one culture are not merely implausible to other cultures, but entirely fail to count as explanations at all. This idea is upheld by Thomas Kuhn's notion of paradigm shifts in science and is exemplified by the shift in thinking between Aristotle and Descartes, as each posited a framework for how the universe functioned. Aristotle held that heavy elements like earth and water tended to seek equilibrium at the center of the universe, while the element that makes up the heavenly bodies sought perfection in the motion of the circle, and thus those bodies orbited in circles. Descartes suggested that nothing could occur without direct contact, implying that tiny particles fill all of space and that their interactions are the reason for the connection between the planets and the sun. The problem with these explanations is illustrated by a postscript in Isaac Newton's Principia: “I have not as yet been able to deduce from phenomena the reason for these properties of gravity, and I do not feign hypotheses. For whatever is not deduced from the phenomena must be called a hypothesis; and hypotheses, whether metaphysical or physical, or based on occult qualities, or mechanical, have no place in experimental philosophy. . . . It is enough that gravity really exists and acts according to the laws that we have set forth and is sufficient to explain all the motions of the heavenly bodies and of our sea.” This idea of "shallow explanation" is what distinguishes modern science from its predecessors: in the past, the aim was to explain the behavior of the world at a depth of understanding impossible to glean from the available data, while modern science postulates only immediate causal principles from observation, plus whatever can be logically deduced from those principles.

October 11: Log-Normal Distribution of Firing Rates in the Human Brain

After coming across a video describing the logarithmic nature of the brain, I was immediately interested in the underlying mechanisms that give rise to this behavior. Essentially, if you took each neuron in the brain and measured how often it fires, what would the resulting distribution of firing rates look like? I had intuitively thought it would be normally distributed, but it turns out the distribution is closer to log-normal, meaning that the logarithm of the random variable (here, a neuron's firing rate) is normally distributed; such distributions arise when a variable is the product of many independent random factors. What is interesting about this is that roughly 10% of neurons fire rapidly and account for about 50% of the information processing, while the other half is handled by the remaining 90% of slower-firing neurons. A potential reason for this is a division of labor in processing: a minority of neurons fire rapidly and have strong connections, serving as generalists that give you a picture of the broad features of reality, while the majority of neurons are analogous to specialists. It turns out that this organization of neural architecture is more efficient in terms of energy consumption and more robust to noise and to the failure of individual components. The reason for the multiplicative nature of the brain is still an open question, but it could be due to the sizes of the spines on a neuron's membrane, which scale with the strength of a synapse. My question is how this efficient architecture could arise through evolutionary processes, and what it means for brain size versus intelligence in terms of selection.
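As a quick sanity check on that 10%/50% split, here is a minimal sketch in Python/NumPy. The parameters (100,000 model neurons, 50 multiplicative factors, an overall log-standard-deviation of about 1.3) are hypothetical values I chose just to reproduce the split described above, not measurements from real data.

    import numpy as np

    rng = np.random.default_rng(0)

    # Multiplicative model: each model neuron's rate is the product of many small,
    # independent positive factors, so log(rate) is a sum of many terms and is
    # approximately normal, i.e. the rates are roughly log-normally distributed.
    n_neurons, n_factors = 100_000, 50
    log_factors = rng.normal(loc=0.0, scale=1.3 / np.sqrt(n_factors),
                             size=(n_neurons, n_factors))
    rates = np.exp(log_factors.sum(axis=1))  # product of factors = exp(sum of logs)

    # Fraction of total spiking carried by the fastest 10% of model neurons.
    sorted_rates = np.sort(rates)[::-1]
    top10_share = sorted_rates[: n_neurons // 10].sum() / rates.sum()
    print(f"Fastest 10% of neurons carry about {top10_share:.0%} of total spikes")

If you add the same factors instead of multiplying them, the skew disappears and the distribution is roughly symmetric, which is why the multiplicative interaction is the key ingredient here.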