1 Thessalonians 5:21, “Test everything; hold fast what is good.”
When I learned to program AI in 1998, AI wasn’t magic. It wasn’t robots taking over the world. It was pattern recognition, statistics, and probability.
That’s it. No Magic.
Artificial intelligence tried to imitate something humans had already been doing for millions of years.
Imagine a group of early humans standing on a rocky hill. A tiger appears. Chaos! People shout. Throw things. Swing sticks. Panic. Experiment. Eventually someone throws a stone and the tiger backs off.
Stone gets a mental plus sign.
Next day a black panther appears. It’s not a tiger, but it looks similar. The brain searches its internal statistics: “Last time: big striped predator → stones worked.” So we try the stones first.
That is intelligence in its raw form:
- Recognize patterns.
- Reuse what worked before in similar situations.
- Adjust when it fails, try other solutions.
- Share the solution with the group.
Humans became dominant not because we are the strongest, but because we are excellent at finding, copying, communicating, and improving solutions together. Groups that shared successful patterns survived more often, and those traits multiplied until they became dominant.
AI systems copy that principle.
What a Large Language Model Actually Does
A Large Language Model is trained on enormous amounts of text to learn patterns. Under the hood, it calculates probabilities.
Given a sentence, it predicts the most likely next word.
Then the next.
Then the next.
It doesn’t “understand” a tiger.
It predicts tiger-related words based on statistical patterns.
It imitates intelligence by choosing the most probable next step.
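That next-word loop can be sketched with a toy bigram counter: pure frequency statistics, no understanding. The tiny corpus below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus -- purely illustrative, not real training data.
corpus = "the tiger attacks the camp the tiger retreats the panther attacks".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "tiger" -- it followed "the" most often
```

A real LLM predicts from billions of learned parameters instead of a lookup table, but the principle is the same: pick the most probable next step.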
But here is the key difference:
An LLM does not track its own long-term statistics during your conversation.
It does not independently discover new problems.
It does not improve itself in real time.
That’s why systems add feedback buttons. Human ratings are collected, and later the model is retrained. The learning happens outside the conversation, not inside it.
My Simple 2-Out-Of-3 Strategy
In practical programming, I used a very simple decision system.
Imagine a process that can choose between Solution A and Solution B.
First run: A works.
Score: AAA.
Next time A fails once. So we test B.
Score: AAB.
Two out of three are still A → choose A.
If failures continue, patterns shift:
ABA
ABB
At some point B becomes statistically stronger.
This is not fancy AI.
It is structured probability with memory.
Keep score.
Pick the majority.
Adjust when trends imply a permanent change.
Simple. Effective.
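The score-keeping above can be sketched in a few lines of Python. The class name and the window size of three are my own illustration, not any specific library API.

```python
from collections import deque, Counter

class MajorityChooser:
    """Pick the option that succeeded most often in the last N runs."""

    def __init__(self, default="A", window=3):
        # Start with a full window of the default, e.g. score AAA.
        self.history = deque([default] * window, maxlen=window)

    def choose(self):
        # Majority vote over the sliding window.
        return Counter(self.history).most_common(1)[0][0]

    def record_success(self, option):
        # Remember which option actually worked this run;
        # the oldest entry falls out of the window automatically.
        self.history.append(option)

chooser = MajorityChooser()   # score: AAA -> choose A
chooser.record_success("B")   # score: AAB -> two of three are still A
print(chooser.choose())       # prints "A"
chooser.record_success("B")   # score: ABB -> B is now statistically stronger
print(chooser.choose())       # prints "B"
```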
Real-World Example: Matching 1,800 Cost Centers

I once had to match:
- 1,800 cost centers (each with 6 attributes)
- 1,500 bank statements (each with 5 attributes)
In theory, that’s 2,700,000 candidate pairs (1,800 × 1,500), and with up to 6 × 5 = 30 attribute comparisons per pair, up to 81,000,000 match operations.
Brute force works.
But it’s slow.
After one full run, patterns appeared:
- 600 matches happened between Cost Feature K1 and Bank Feature B1.
- 300 matches between K1 and B2.
- And so on.
- An estimated 95% will remain the same on the next run.
So what do you do?
You start next time with the most statistically successful pairing: K1-B1.
If 95% of those 600 matches repeat reliably, you instantly eliminate thousands of unnecessary comparisons. Each direct rematch saves up to 6 × 5 = 30 match operations, so across 1,800 cost centers that is up to 54,000 matching operations. Then you take the second most successful pairing (K1-B2) and only have to search 1,200 cost centers, since 600 were already matched in the first batch. That is search space minimization, an old technique.
“Smart search.”
Not by thinking harder.
By reusing stored success.
Across the whole dataset, that eliminated roughly 95-98% of the search effort. If nothing changed, you don’t have to search for a solution, you already have it.
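A rough sketch of that idea: remember which attribute pairings produced matches in past runs, and try the pairings in that order next time. The statistics and attribute indices below are hypothetical, echoing the numbers above (index 0-0 standing in for K1-B1).

```python
from collections import Counter

# Past run statistics: (cost_attr_index, bank_attr_index) -> match count.
# Hypothetical numbers echoing the article: K1-B1 dominated, then K1-B2.
stats = Counter({(0, 0): 600, (0, 1): 300, (2, 3): 50})

def comparison_order(stats, n_cost=6, n_bank=5):
    """All 6 x 5 = 30 attribute pairings, historically best first."""
    pairs = [(i, j) for i in range(n_cost) for j in range(n_bank)]
    return sorted(pairs, key=lambda p: -stats[p])

def match(cost_attrs, bank_attrs, order, stats):
    """Try pairings in the learned order; stop at the first hit
    and reinforce that pairing's score for the next run."""
    for i, j in order:
        if cost_attrs[i] == bank_attrs[j]:
            stats[(i, j)] += 1
            return (i, j)
    return None

order = comparison_order(stats)
print(order[0])  # prints (0, 0) -- the K1-B1 pairing is tried first
```

When the dominant pairing hits, one comparison does the work of up to thirty, which is where the bulk of the savings comes from.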
That’s also the coder’s creed: DRY, Don’t Repeat Yourself. Reuse the solution pattern, reuse the code, and write (standard) functions.
Efficiency is often just memory, applied. The same idea underlies dynamic programming.
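One standard-library illustration of “efficiency is memory”: memoization, the core trick of dynamic programming. Fibonacci is the classic textbook case here, not the cost-center matcher itself.

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # remember every answer we have computed
def fib(n):
    """Naive recursion redoes the same subproblems exponentially often;
    the cache turns it into linear time by reusing stored results."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # prints 12586269025 -- instant, thanks to reused solutions
```

If nothing changed, you don’t compute the solution again; you already have it.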
ISO 9001 and the Military Mindset
Interestingly, quality standards like ISO 9001 are built on similar logic. The framework traces back to military standards developed in the United States to keep massive operations running reliably.
When survival depends on logistics, discipline, coordination, and covering each other’s backs, you:
- Define procedures.
- Standardize patterns.
- Measure everything.
- Improve continuously.
That is our evolutionary survival method, our ‘intelligence’ operation, translated into management.
Define the pattern. Spot the problem, classify it, test a solution pattern.
Execute it.
Measure it.
Change it or Improve it.
Over and over.
The difference is urgency.
In the military, failure can mean life or death. That is the ‘tiger’, right there. In companies, the pressure is usually financial rather than existential. That changes behavior and mentality.
Discipline Is Learned, Not Installed
Young recruits entering the military often lack discipline, coordination skills, a sense of responsibility, and structured communication habits. The first phase is not combat training. It is behavioral standardization.
Same with sailing: you have to get ‘onboarded’, get used to a new rhythm and a new way of life, learn your tasks and responsibilities, and learn the terms and ‘command verbs’. Once you know the basics, you can work on any ship with any colleague.
Companies experience something similar.
New employees need time to learn:
- Structured communication
- Process discipline
- Responsibilities
- Documentation habits
Even something as simple as riding a bicycle required structured training before you learned to signal your direction. You need to learn the ‘rules’ to move safely through traffic.
Communication is not automatic.
It is trained.
And intelligent systems, whether human organizations or AI models, only perform well when patterns are clearly defined and feedback loops are active.
The Big Picture
From early humans throwing stones…
To statistical matching algorithms…
To large language models…
To ISO standards…
The core principles remain the same:
Recognize patterns.
Track success.
Reuse what works.
Measure outcomes.
Adjust continuously.
AI did not invent this logic.
It mimics our intelligence.
And in many ways, we are still very organized stone throwers, just armed with better statistics and better stones.

