Are Our Brains Wired to Think Alike?

January 9, 2024

What aspects of our brain's functioning are innately programmed? A discussion of innate brain functions, perceptual narrowing, numerical perception, the visual word form area, and the roles of evolution versus experience in how the brain works.

Transcript

Hello! I’m Leo Isikdogan, and welcome to the Cognitive Creations podcast.

Today, we're talking about the similarities in our brains, including which aspects of brain function are innately programmed, how our perception evolves over time, and how reading is even possible. We will discuss innate brain functions, inductive biases, perceptual narrowing, numerical perception, and more. Alright, let's get started.

Synchronicity in Scientific Discovery

During my PhD, I noticed something curious. Whenever I came up with a novel solution to a research problem and submitted a paper, I'd find that others around the world had just published very similar work, almost in synchrony with my submission. This got me thinking. Do our brains naturally follow similar paths? Maybe the human brain, no matter where we're from or what we've experienced, tends to come up with similar ideas and inventions given the same kind of information.

This wasn't just a thing in academia. I'm not publishing papers anymore, but I still see it. I often come up with what I think is a new method or idea. But then, maybe a few months later, I see someone somewhere else discussing something very similar online. And it's happening faster now: the time frame keeps getting shorter as more people start working on the same problems I do. It's pretty wild. Maybe our brains are inherently wired to think along similar lines, converging toward similar ideas, discoveries, and inventions when given similar inputs.

Here's another twist. If a year goes by and I don't see anyone else publishing my idea, I start to think maybe it doesn't actually work. Chances are it indeed does not. In the cases where I've actually tried some of those ideas out, they usually don't work well in practice. It's like there's a silent agreement out there. If we're not publishing it or talking about it, then maybe it's not a good idea after all.

Publication Bias

This also leads to an interesting point about what gets published and what doesn’t. A lot of research never gets published, especially studies with results that are less significant or unexpected. Negative results are hard to publish, as no one seems to be excited about them. This is known as publication bias.

So, just because I don't see my idea out there doesn't mean no one else has thought of it. Perhaps they tried it and couldn't get it to work, or didn't get the results they hoped for, so it never made it to publication.

In a world with billions of brains, if we have enough people working in the same domain, it's almost a given that multiple people will end up having similar thought processes. This isn't just a coincidence. It shows how our brains, though different in experiences and views, have a common way of thinking and creating ideas.

Innate Brain Functions

So, this leads us to an interesting question: What aspects of our brain's functioning are innately programmed, and how does this shape our collective thinking?

Certain brain structures must inherently be similar in everyone, or how else could our thought processes align so closely? But first, what does innate really mean? It's not necessarily something present at birth, but rather something determined at birth, independent of experience.

Babies can recognize the identity of a face across different angles shortly after they are born. This doesn’t seem like something they learn, but rather something they are born with. Obviously, they learn to recognize individual faces, like their own mother's, rather than some other random person's, but the ability to recognize faces itself doesn’t seem like something they have learned.

This natural ability is remarkably efficient compared to some artificial neural networks, which often require exposure to huge amounts of data when trained from scratch. Unlike these systems, we humans don’t need to see hundreds of thousands of faces to start recognizing them.

Inductive Biases

Humans and many other animals are very efficient learners. We use inductive biases that make learning very efficient. Inductive biases are the set of assumptions that guide how we process information, form patterns, and make decisions. They help us generalize from experiences without having to observe a countless number of examples.

A classic example of an inductive bias is Occam's razor. It's a simple idea: when two competing hypotheses make the same predictions, the simpler one is usually better. This isn't always true, but it's a useful principle in many settings, including machine learning.
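To make that concrete, here's a minimal Python sketch, with made-up data purely for illustration: it fits a simple line and a much more flexible polynomial to the same noisy samples, then checks both against points they haven't seen. The simpler hypothesis usually generalizes better, which is Occam's razor at work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples from an underlying linear relationship.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 1 + rng.normal(scale=0.2, size=x_train.shape)

x_test = np.linspace(0, 1, 100)
y_test = 2 * x_test + 1  # noise-free ground truth for evaluation

# Two competing "hypotheses": a simple line vs. a flexible degree-7 polynomial.
simple_fit = np.polyfit(x_train, y_train, deg=1)
complex_fit = np.polyfit(x_train, y_train, deg=7)

simple_err = np.mean((np.polyval(simple_fit, x_test) - y_test) ** 2)
complex_err = np.mean((np.polyval(complex_fit, x_test) - y_test) ** 2)

print(f"test error, degree 1: {simple_err:.4f}")
print(f"test error, degree 7: {complex_err:.4f}")
# The degree-7 polynomial bends to fit the training noise and typically
# does worse on unseen points, even though both models "explain" the data.
```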

We humans have a general understanding of how the world works baked into our brains. The human brain is wired to recognize patterns. This inductive bias allows us to make sense of the world by identifying regularities and making predictions based on them.

Understanding inductive biases is also crucial in designing robust AI models that generalize well.

AI architectures have their own inductive biases. For example, convolutional neural networks assume that a feature useful in one part of an image will likely be useful in other parts too. So, they share the same weights across the entire input. They also assume that the world is compositional, meaning that complex patterns can be understood by combining simpler ones. Convolutional neural networks are loosely inspired by the human visual system, although they are vastly different in many ways.
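As a rough illustration, here's a minimal PyTorch sketch (the layer sizes are arbitrary, chosen only for illustration): a small convolutional network slides the same small filters across every image location, so it has far fewer parameters than a fully connected layer over the same input, and it builds complex features by stacking simpler ones.

```python
import torch
import torch.nn as nn

# A tiny convolutional network for 32x32 grayscale images.
# Weight sharing: each 3x3 filter is applied at every spatial location,
# so the same weights are reused across the whole image.
# Compositionality: later layers combine the simpler features
# detected by earlier layers into more complex patterns.
conv_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # simple local features
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combinations of those features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)

# A single fully connected layer over the same 32x32 input, for comparison.
dense_net = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 10))

count = lambda m: sum(p.numel() for p in m.parameters())
print("conv parameters: ", count(conv_net))   # about 1,400
print("dense parameters:", count(dense_net))  # about 10,000

x = torch.randn(1, 1, 32, 32)  # one fake image
print(conv_net(x).shape)       # torch.Size([1, 10])
```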

For certain tasks, our brains seem to come already primed for 'few-shot learning': learning from very small amounts of data, typically only a few samples. It's as if we're born with a built-in model for recognizing certain types of inputs.
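For intuition, here's a minimal sketch of one common few-shot approach, a nearest-class-mean classifier; the tiny 2D "feature vectors" are made up for illustration, whereas real systems would use learned embeddings. With only a couple of labeled examples per class, a new example is assigned to the class whose average example it sits closest to.

```python
import numpy as np

# Toy "feature vectors": two labeled examples per class.
support = {
    "cat": np.array([[0.9, 0.1], [0.8, 0.2]]),
    "dog": np.array([[0.1, 0.9], [0.2, 0.8]]),
}

# One prototype per class: the mean of its few examples.
prototypes = {label: vecs.mean(axis=0) for label, vecs in support.items()}

def classify(x):
    # Assign x to the class with the nearest prototype.
    return min(prototypes, key=lambda label: np.linalg.norm(x - prototypes[label]))

print(classify(np.array([0.85, 0.15])))  # -> "cat"
print(classify(np.array([0.15, 0.85])))  # -> "dog"
```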

This concept isn't new in hardware technology. We see similar approaches in the design of processors, neural network accelerators, and systems-on-a-chip in general. These chips have specialized parts for specific functions. For example, some components are dedicated to video encoding and decoding, and others are designed to run common neural network operations as quickly and efficiently as possible.

But anyway, I digress. Let's get back to our discussion about the human brain.

Perceptual Narrowing

There's this notion of 'perceptual narrowing', which is about how our brains specialize as they develop. We tend to think that our perception broadens as we learn more things, but the concept of perceptual narrowing is actually about the opposite. It's like our minds start off with a broader, more universal toolkit, capable of processing a wide range of stimuli. But as we grow, our brains 'narrow down' this toolkit, focusing on the stimuli that are most relevant to our day-to-day lives.

Human babies, for example, are born with the capability to detect many different types of stimuli. When they're born, they can pick up on a lot of different things. They can tell apart sounds from foreign languages and recognize all sorts of faces, even non-human faces. But as they grow, they begin to concentrate on specific perceptions. They narrow down their perception towards what’s more relevant socially and culturally in their environment. For example, they become better at recognizing faces they are accustomed to seeing but worse at recognizing those from ethnic groups that are not familiar to them. Similarly, their ability to discern foreign language sounds diminishes, as they tune more into the phonetics of their native language.

Essentially, as we develop, our brains optimize for efficiency by focusing on the information most useful for our environment, while sacrificing some of the broader perceptual abilities we had as infants.

Numerical Perception

I think another interesting inborn trait to explore is our brain’s natural ability to comprehend numbers. Our brains come equipped with a biologically determined, domain-specific ability to perceive and process quantities. And this is a trait that's not only found in humans but also in some animals.

This innate numerical ability, along with other core brain functions, influences how we perceive and make sense of the world. It forms a common basis for quantifying and interpreting our surroundings.

Ability to Read and Visual Word Form Area

We see that our brains seem to have built-in domain-specific knowledge, shaped by evolutionary forces. This raises a key question: how much of our brain's functioning is a product of evolution, and how much is shaped by our experiences?

Take reading, for example. Humans have only been reading for a few thousand years, a blip in the timeline of evolution.

This obviously isn't nearly enough time for our brains to have evolved a specific circuit dedicated to reading. Yet studies have found areas in the brain, like the 'visual word form area', that are selectively responsive to visually presented words or letters.

This area is adjacent to regions involved in processing visual information about faces and objects, which aligns with its role in deciphering complex visual stimuli. It is also interconnected with other brain regions involved in language processing. It might be that our brains have repurposed pre-existing neural circuits, initially developed for other functions, to accommodate new tasks like reading. Interesting stuff!

Alright, that brings us to the end of this episode. Let me know what you think. Though how ironic that I'm asking: given how alike our brains are, it's like I'm just asking myself for a second opinion!

Thanks for listening and I’ll see you next time.