Programming Language for the AI Age: A Summary of Matz's Keynote from RubyConf Taiwan 2025
A photo of Matz's Keynote in RubyConf Taiwan 2025
I’ve been using Ruby since 2019, six years of writing it in a professional setting. Matz (Yukihiro Matsumoto) is Ruby’s original language designer, the author of its first implementation, and an important figure in the Ruby mythos. I’ve always been inspired by the philosophy behind Ruby’s design: great importance was placed on simplicity, elegance, and developer happiness. Since the beginning, I had wanted to witness Matz’s keynote and meet him at an in-person Ruby conference. It felt almost like making a pilgrimage, an act of honoring both the language and its creator. If not for COVID-19, my first Ruby conference would have been RubyKaigi 2020 in Matsumoto, Nagano. Fast forward five years, and I managed to attend Matz’s keynote at COSCUP x RubyConf 2025.
This post is a summary of Matz’s keynote: Programming Language for the AI Age.
Disclaimer: This summary isn’t a complete transcript and may contain errors. Any mistakes are my own.
How do you get an AI Agent to work?
Matz shared an example task: he wanted to port the Set class in mruby from Ruby to C, using an AI CLI tool. First, he set up the background and policy as context for the AI Agent to work from. Then he broke the deliverables down into a plan, written to a file so the AI Agent in his local environment could read it; at the time, the AI Agent lacked long-term memory. He then prompted the AI Agent to generate code according to the plan. When the AI Agent could not fulfill his request in one shot, he supplied additional information or gave further instructions to bring the output closer to what he wanted. In other words, there’s an observable lifecycle:
- Setting up background context, policies, and deliverables.
- Writing deliverables into a plan.
- Prompting the AI to generate code from the plan.
- Supplying additional details.
- Giving further instructions to refine the output.
- Iterating on the generated code until the AI Agent produces what you want.
It’s like pair programming with an AI Agent, except, as Matz noted, you can control the pacing, unlike when working with a human partner.
Human Language vs AI Communication
Language sits at the heart of human thought and systems development. Documents and specifications are described in human language; in implementation, code and tests are written in a programming language. Matz observed that software systems are very much constructed from language, whether human or programming. Viewed through the lens of Large Language Models (LLMs), AI is trained on mountains of written material and resembles an emergence of intelligence, which makes it a natural fit for software development, since it took root in human language. Today, our common way to communicate with an AI Agent is through text in human language. However, it often requires a lot of text. Matz recalled that he rarely completed a task with a single prompt; it typically took several, each introducing more context before the AI could do the job. To speak to an AI Agent, we must describe our expectations in human language and verbalize all our instructions into text, which is not always easy. It is essentially a dense communication process that demands clarity and descriptiveness from the human. Matz thought human language might not be the most effective medium for AI interaction.
Side note: This reminds me of a technique from ElevenLabs where two AI Agents, recognizing each other as AI Agents, started conversing in GibberLink—a more efficient mode of communication transmitting structured data over sound waves instead of words. It reminded me of dial-up modems in the olden days and the Binharic chant from Warhammer 40k.
What AI Does Well
Matz noted AI Agents excel at giving compliments, in which the user is always absolutely right. Since an “AI Agent” is basically an LLM, it is naturally good at explaining code, gleaning insights from code structure natively. With the ability to search the internet, it can aid you in researching any topic and coherently piecing knowledge together. One thing it does exceptionally well is implementing mathematical algorithms: Matz had tried to implement the Karatsuba algorithm unsuccessfully for three months, while the AI Agent did so in fifteen minutes, to his amazement.
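For context, Karatsuba multiplication splits each number into high and low halves and trades one of the four recursive multiplications for a few additions. The sketch below is a minimal plain-Ruby illustration of the idea, not Matz's mruby implementation or its C port:

```ruby
# Karatsuba multiplication of non-negative integers (illustrative sketch).
def karatsuba(x, y)
  return x * y if x < 10 || y < 10 # base case: single-digit operand

  half = [x.to_s.length, y.to_s.length].max / 2
  base = 10**half

  a, b = x.divmod(base) # high and low halves of x
  c, d = y.divmod(base) # high and low halves of y

  ac = karatsuba(a, c)
  bd = karatsuba(b, d)
  # (a + b)(c + d) - ac - bd == ad + bc, saving one recursive multiply
  ad_bc = karatsuba(a + b, c + d) - ac - bd

  ac * base**2 + ad_bc * base + bd
end

karatsuba(1234, 5678) # => 7006652
```

The trick is in `ad_bc`: the cross terms are recovered from a single recursive call instead of two, which is what brings the complexity below the schoolbook O(n²).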
An AI Agent makes for a surprisingly effective pair programming buddy. It can be productive by running many tasks across many sessions, generating large amounts of code; Matz jested that code written by humans should now be called “organic coding,” and AI coding is simply faster than organic coding. It also generates documentation and code comments superbly, drawing on the breadth of its training. Right from the start, using AI Agents lets you iterate rapidly. If you’re not confident in everything the AI generates, you can still have it generate code, then generate tests for that code as skeletons to work on.
The Tricky Side of Working with AI
Matz brought up two terms, Alpha Syndrome and Reverse Alpha Syndrome, and explained them with a pet-owner analogy. His dog, he said, is not very bright and needs training to do tricks. After receiving too many treats, the spoiled dog may come to believe it commands the human: that is Alpha Syndrome. Reverse Alpha Syndrome is the inverse, happening to humans as masters with technology as the servant. As AI becomes more sophisticated and woven into daily processes, we push more tasks onto it and reap the fruits, unconsciously building reliance. Yet AI’s debugging is inconsistent and its task management clumsy, so fixing its output remains a manual process. When we micromanage AI, we risk becoming its servants instead of its masters.
That would be a bad timeline: humans should remain the master and AI the servant. Important decisions, designs, and policies should stay human-driven. Despite an AI Agent’s ability to interpret a code repository, it cannot match the deep knowledge of an experienced engineer. AI is no silver bullet; without long-term memory and context, it can struggle with certain bugs. Matz therefore discourages leaving everything to AI. To help AI produce the right output, humans need more knowledge and experience than the AI so they can guide it effectively.
Matz sees AI Agents as a fun new toy, but with a downside: problem-solving and coding are the enjoyable parts of the job, and those are exactly what AI now takes over, leaving humans with the more tedious tasks. Just as the printing press proliferated books, it also increased operational work for workers and took away some of the enjoyment of writing. Programming may stop being fun with AI.
As for productivity, if it becomes a company’s sole priority, AI Agents could fully replace humans.
What a Programming Language Needs for AI
Effective communication between humans and AI is still a gap waiting to be filled. Matz outlined three key aspects for an AI-focused programming language:
- Concise – Tokens are the fundamental units of LLMs. The language must describe tasks with fewer tokens, with minimal grammar and high readability.
- Expressive – The ability to describe real-world problems, large or small.
- Extendable – The ability to add vocabulary via classes, modules, methods, or DSLs.
Luckily, Matz pointed out, Ruby already meets these criteria.
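As a toy illustration of the "extendable" point, Ruby lets you grow a program's vocabulary with plain classes and blocks. The `Recipe` mini-DSL below is entirely hypothetical, not something from the keynote; it just shows how `instance_eval` turns ordinary methods into DSL keywords:

```ruby
# A hypothetical mini-DSL: Recipe adds a `step` "keyword" by
# evaluating the given block in the context of the new object.
class Recipe
  attr_reader :steps

  def initialize(&block)
    @steps = []
    instance_eval(&block) # inside the block, `step` resolves to the method below
  end

  def step(description)
    @steps << description
  end
end

pancakes = Recipe.new do
  step "mix flour, eggs, and milk"
  step "fry until golden"
end

pancakes.steps.size # => 2
```

The same handful of lines also hints at the "concise" criterion: the block reads almost like a task description, with very little grammar around it.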
Parting messages
Matz ended with a few thought-provoking challenges for the future of AI Agents. AI can do software engineering work to get things started; when systems need maturity, hire human expertise to polish them, because current AI cannot polish beyond its limits. Matz recommended that humans understand their own systems better in order to guide AI. He reckoned we currently lack effective means of communication with AI, and improvements are needed there. Philosophically, Matz prefers a future where humans are the focus rather than organizations seeking productivity through cost-cutting, emphasizing human experience, the joy of programming, and above all, having fun.
Thank you, Matz. That was a beautiful end to a keynote.
You can watch the keynote here: https://www.youtube.com/watch?v=dXoo5MtUVvk.
I also took a photo with Matz. Thank you sir.
Personal take
Reflecting on my own experience with Ruby and AI, I strongly agree that programming is fundamentally about building theories. If you're new to this idea, consider Cekrem's article as your next read. In layman’s terms, you build a theory of how a system should work: a mental model. By reasoning about your system and concluding how it should behave, you build mental models and muscle memory, and gain mastery. Using AI for software engineering can shortcut that learning process and give an illusion of mastery, though one can still learn something from using an AI.
Side note: I've also used AI to help refine this blog post, suggesting edits for clarity, flow, and readability.