
The Limits of Working Memory: Human Brains vs. AI Models

In computer science, the "working set" refers to the portion of data a program actively needs in memory at a given moment to get its work done. This idea has an interesting parallel in human cognition, known as "working memory." Working memory is like a mental scratchpad where we hold information temporarily, and it is essential for reasoning and making complex decisions.
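
As a loose analogy (not a cognitive model), a minimal Python sketch can make the "fixed-capacity scratchpad" idea concrete, treating working memory as a bounded buffer that evicts old items to make room for new ones:

```python
from collections import deque

# Loose analogy only: working memory as a fixed-capacity buffer.
# The capacity of 4 "chunks" reflects a common estimate from
# cognitive science; once full, each new item evicts the oldest.
working_memory = deque(maxlen=4)

for item in ["alert", "source IP", "dest port", "timestamp", "payload hash"]:
    working_memory.append(item)
    print(list(working_memory))

# After the loop, "alert" has been pushed out: only the four most
# recent chunks remain, no matter how much information arrives.
```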

However, working memory has a limited capacity, which constrains our ability to tackle very complicated problems. This insight matters as we move into a world increasingly influenced by AI, especially in fields like cybersecurity.

A brain's bandwidth  

Cognitive science offers a variety of tests for measuring a person's working memory capacity. These tests consistently reveal an upper limit (classic estimates put it at roughly four to seven "chunks" of information), even for people who are exceptional thinkers.

For example, chess masters, mathematicians, and top musicians have remarkable skills, but their raw working memory capacity is probably not much different from the average person's. Evolution seems to have fine-tuned our brains for a certain level of cognitive ability, and that ceiling can make it hard for us to fully grasp very large, complex problems.

The AI advantage: Scaling the context window (with caveats)  

Artificial intelligence systems, especially Large Language Models (LLMs), have an analogous limitation called the "context window": the number of tokens (fragments of words or code) a model can process at once. Unlike human working memory, which is fixed, an AI's context window can be expanded, though expansion is expensive. More GPUs, better algorithms, or new hardware can all increase an AI's capacity.
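
To make the constraint tangible, here is a minimal sketch using the open-source tiktoken tokenizer (an assumption for illustration; any tokenizer would do). The 8,192-token limit is illustrative rather than tied to a particular model:

```python
import tiktoken  # pip install tiktoken

CONTEXT_WINDOW = 8192  # illustrative limit, not a specific model's

enc = tiktoken.get_encoding("cl100k_base")

def fit_to_context(text: str, limit: int = CONTEXT_WINDOW) -> str:
    """Truncate text to the token budget; everything past it is lost."""
    tokens = enc.encode(text)
    return text if len(tokens) <= limit else enc.decode(tokens[:limit])

log_dump = "firewall denied connection from 10.0.0.5 " * 5000
print(len(enc.encode(log_dump)))                   # far beyond the window
print(len(enc.encode(fit_to_context(log_dump))))   # clipped to about 8192
```

Anything truncated is simply invisible to the model, much as information beyond working memory's limit is unavailable to us. The difference is that the constant above can, at a price, be raised.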

However, this approach has important limits. At first glance, tasks like finding anomalies in huge, complex datasets seem perfect for LLMs, and spending heavily on AI processing time might seem worth it for those insights.

But it's important to know that anomaly detection often fits better with traditional machine learning (ML) and statistics than with advanced neural networks and deep learning. This is a good thing: traditional methods are usually more predictable and reliable than some newer AI applications.
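
As a minimal sketch of the traditional-statistics approach (with made-up numbers), a simple z-score rule flags outliers using a decision boundary anyone can inspect, no neural network required:

```python
import numpy as np

def zscore_anomalies(values: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Return indices of values more than `threshold` standard
    deviations from the mean: a fully inspectable decision rule."""
    z = np.abs(values - values.mean()) / values.std()
    return np.flatnonzero(z > threshold)

# Hypothetical bytes-per-minute readings for one host: 50 ordinary
# measurements plus a single exfiltration-sized spike.
rng = np.random.default_rng(42)
traffic = np.append(rng.normal(120, 10, size=50), 9800.0)

print(zscore_anomalies(traffic))  # -> [50], the 9800-byte spike
```

In production you would likely reach for something more robust (median-based scores or isolation forests, for instance), but the predictability described above comes from exactly this kind of transparent rule.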

Yet a bigger problem exists: today's AI models and applications are still deeply flawed. Failures in self-driving cars and LLMs producing convincing but wrong "hallucinations" remind us of this. Before using AI widely to solve big problems, we must make sure these systems are reliable and consistently correct.

Finding the right tools for the right job

So, where does this leave us? The key is to understand that human brains and AI models have different strengths. Humans, with our fixed cognitive capacity, are good at solving detailed problems within the limits of our working memory, and we usually choose problems we have a good chance of solving ourselves.

AI models, on the other hand, can be scaled up for problems that would overwhelm a human mind. But this approach only makes sense when both of the following hold (a rough sanity check follows the list):

  • The financial value of the solution exceeds the cost of the computation.
  • The accuracy of the result can be trusted enough to act on.
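
A back-of-the-envelope version of that test might look like the following sketch, where every dollar figure and the 0.9 trust bar are hypothetical placeholders:

```python
def worth_scaling_up(solution_value_usd: float,
                     compute_cost_usd: float,
                     result_trust: float,           # 0.0-1.0 confidence in correctness
                     required_trust: float = 0.9) -> bool:  # hypothetical bar
    """Scale-up is justified only if the trust-weighted value beats
    the compute bill AND the result is trustworthy enough to act on."""
    expected_value = solution_value_usd * result_trust
    return expected_value > compute_cost_usd and result_trust >= required_trust

# A $500k breach-prevention insight from a $50k analysis run, but with
# only 70% confidence in the output: the value clears the cost, yet the
# trust condition fails.
print(worth_scaling_up(500_000, 50_000, result_trust=0.70))  # False
```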

The high cost of processing complex datasets, along with current limits on explainability and correctness, means that the problems we can solve with today's AI remain limited. For now, AI's best use is to reduce scope and complexity down to a smaller set of problems approachable through human cognition.

Integrating AI in cybersecurity: A strategic approach

In cybersecurity, the goal is not just to fight AI-generated threats with more AI but to use AI wisely within security operations. Security teams need to be deliberate about how they invest in AI, making sure it enhances their capabilities without creating new weaknesses.

Illumio is doing this with AI cybersecurity capabilities like the Illumio Virtual Advisor, which offers conversational answers to questions about setting up and using Zero Trust Segmentation, and the new AI-powered labeling engine, which gives instant visibility into assets in data centers and cloud environments to help organizations adopt Zero Trust Segmentation faster.

These solutions use AI to analyze large amounts of data while keeping human oversight central to security decision-making. By combining humans' adaptive, flexible reasoning with AI's processing power, security teams can better handle the complexity of modern threats.

Will AI exponentially reduce costs and improve trust?

Could a Moore's Law-like effect drastically lower the cost of high-performance AI over time? This is still uncertain. AI progress depends not only on hardware but also on better algorithms, improved datasets, and advances in making AI results reliable and trustworthy.

That’s why it’s vital to understand the limits and costs associated with both human working memory and AI context windows. Choosing the right tool, whether the human brain or a machine, requires a clear grasp of the problem's size, its potential value, the cost of finding a solution, and how reliable the results are. In cybersecurity, this means using AI strategically to boost human abilities, ensuring that the mix of human intuition and machine power is grounded in growing trust in AI solutions.

Looking ahead, the future will likely involve a strong partnership between human thinking and artificial intelligence, with each supporting the other in facing the challenges of a more complicated world.

Want to learn more about the Illumio Zero Trust Segmentation Platform AI offerings? Contact us today.
