GenAI In The Loop
- Leon Como

- Jul 14, 2025
- 3 min read
The viable architecture for intelligent systems

GenAI is not a bubble like the dot-com boom.
But the current trend is more worrying than any bubble burst.
Systemic collapse will be much harder to recover from, and it appears we are racing, on many fronts, to build systems that are prone to exactly such a collapse.
This is because we are mainly fixated on optimizing value capture, which outpaces value generation and consumption by a wide margin. That fits the dreaded definition of entropy, or insidious decay.
The paper I am sharing is far from perfect, but it accurately reflects the pivot needed away from what we are currently investing so much time, money, and effort in.
Here's the chain of prompts I used:
1) The intelligence systems of many countries, particularly the technically optimized ones, are hyper-sophisticated at their core, but their tentacles can be messy. The algorithm can still make costly mistakes, including in how it reads the return signals from unconventional and emergent entities.
2) We seem to be in agreement. But most of the examples you used are not failures of intelligence but intentional baits.
3) Yes, therefore we can declare:
GITL (GenAI In The Loop) is the usable architecture. AGI and related variants are tar pits.
Perhaps scientists will soon have a name for the insurmountable barrier between what makes a human mind work and the replications being attempted in machines. More likely, we may never fully figure it out. Yet even if we can replicate and optimize only about 20% of human intelligence, the use cases will be immensely powerful for many human pursuits in an optimized GITL.
4) Please take all the important points in this thread then write the paper.
Include a list of DSE references at the end.
Also list the novel insights not linked to any prior reference.
DSE = distillations, syntheses and extrapolations.
Use simple listing for the citations - no preferred format.

My thoughts on the generated paper:
1. Wherever it alludes to a companion, colleague, copilot, or assistant, it should say "GenAI system" to mitigate anthropomorphism.
2. RLHF must never be considered foolproof or always generative. It can be degenerative due to human inconsistencies and the obsolescence of choices.
3. We must differentiate computed predictions from rationally intuitive sensemaking.
4. Add the biggest barrier to AGI: the encompassing and harmonized human senses that can tap into a universal vibe. It will be very expensive for AI compute to reasonably mimic these unexplainable human abilities.
5. The codified goals and reward systems need regular review and updates because they are prone to drift and inevitable collapse. The same is true for humans, but humans have innate autocorrect and coping mechanisms.
6. Consider that any over-optimization tends to be inevitably eroded by overfitting.
7. AI might be able to pick the best choice out of millions in seconds, but deciding to use that choice, and being accountable for it, will be costly for AI, while it is a relatively cheap and quick intuitive pick for humans.
8. The real future with GenAI is a system that generates more value and puts more humans in meaningful roles - not a system unreasonably optimized to need fewer humans for value capture in the guise of seemingly noble but misguided pursuits.
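The drift and over-optimization described in points 5 and 6 can be sketched as a toy simulation. This is purely illustrative and not from the paper: it assumes a Goodhart-style setup in which a system hill-climbs a frozen proxy reward while the true goal it was codified from drifts over time, so the proxy score stays perfect while real value quietly decays. All function names here are hypothetical.

```python
# Hypothetical toy model of reward drift (Goodhart-style), not from the article.

def true_value(x: float, t: int) -> float:
    """The real goal drifts over time: its optimum moves away from 1.0."""
    target = 1.0 + 0.1 * t          # the world changes; the goal shifts
    return -(x - target) ** 2

def proxy_reward(x: float) -> float:
    """The codified reward stays frozen at the original target of 1.0."""
    return -(x - 1.0) ** 2

def optimize_proxy(steps: int, lr: float = 0.1) -> float:
    """Gradient ascent on the frozen proxy; gradient of -(x-1)^2 is -2(x-1)."""
    x = 0.0
    for _ in range(steps):
        x += lr * -2.0 * (x - 1.0)
    return x

x = optimize_proxy(steps=100)
print(f"proxy score:     {proxy_reward(x):.4f}")    # near 0.0, its maximum
print(f"true value t=0:  {true_value(x, 0):.4f}")   # also near 0.0 at launch
print(f"true value t=20: {true_value(x, 20):.4f}")  # far below zero: decay
```

The optimizer looks flawless by its own frozen metric, which is exactly why the review-and-update loop in point 5 has to be a standing practice rather than a one-time calibration.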