Update
New Repository: entity-translation-protocol
I just published entity-translation-protocol, an
open-source protocol and prompt-driven workflow for large-scale
proper-name and entity translation. It began as a fix for MLB
player-name translations in Simplified Chinese and has since expanded
into a reusable system for multilingual person-name translation and
other entity-heavy localization work.
The repository includes protocol docs, examples, prompt packs, a
machine-readable manifest and schema, and workflow templates. The
goal is to make entity translation more structured, auditable, and
scalable across many languages and domains.
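To make the idea of an auditable, machine-readable manifest concrete, here is a minimal sketch of what one entry and a validation check might look like. This is illustrative only: the field names (`entity_id`, `translations`, `status`) and the ID convention are my own assumptions, not the repository's actual schema.

```python
import json

# Hypothetical manifest entry for one entity. The structure below is an
# assumed format for illustration, not the schema shipped in the repo.
entry = {
    "entity_id": "mlb:player:shohei-ohtani",  # assumed ID convention
    "source_name": "Shohei Ohtani",
    "source_lang": "en",
    "translations": {
        "zh-Hans": "大谷翔平",  # Simplified Chinese rendering
    },
    "status": "reviewed",  # assumed audit-trail field
}

# Fields an entry would plausibly need before it can be audited.
REQUIRED_FIELDS = {"entity_id", "source_name", "source_lang", "translations"}

def validate_entry(e: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry passes."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - e.keys()]
    if not isinstance(e.get("translations"), dict):
        problems.append("translations must map language codes to names")
    return problems

print(json.dumps(entry, ensure_ascii=False, indent=2))
print(validate_entry(entry))  # empty list means the entry is valid
```

A check like this is what makes entity translation auditable at scale: each name pair carries its provenance and review status, and the manifest can be validated mechanically before it is used in a localization pass.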
Published April 14, 2026
Note
Why I No Longer Think Symbols Alone Are Enough
I used to think that sufficiently strong symbolic or language-based
learning would eventually produce deep world understanding on its own.
Over time, I changed my mind.
Coming from a scientific background, I became more aware that real
understanding often depends on organizing noisy, continuous phenomena
into stable objects, variables, and causal relations before formal
abstraction becomes useful. When I started thinking seriously about
AI, I found the same pattern compelling in human development:
children seem to build understanding from grounded perception and
interaction first, not from symbols alone.
What changed my mind was seeing how often strong pattern completion
did not translate into robust causal intuition, stable structure, or
transfer across changes in representation. That made me think that
grounded world modeling is not just an optional module, but may be
one of the central requirements for intelligence. I now believe that
abstract reasoning probably does not fully substitute for a
developmental path through perception, structure, and world
modeling.
Published April 4, 2026
Note
What AI Changes in Scientific Workflows
I have been thinking about world models in a more developmental way:
not as systems that simply produce plausible outputs, but as systems
that gradually build grounded understanding. My intuition is that if
AI is going to matter deeply for science, it may need to learn in
layers, starting from perceptual structure and moving toward causal,
mathematical, and eventually scientific understanding. What interests
me most is the idea that intelligence may require not just scale, but
a developmental path.
Published March 30, 2026
Update
My New Website Is Live
I wanted this new site to do more than collect publications and
project summaries. It is also a place to keep track of shorter notes,
ongoing questions, and ideas that are still forming. I expect this
page to grow slowly over time as I keep writing about science, AI,
and the ways they increasingly shape each other.
Published March 30, 2026