The Last Program We Ever Wrote

When I began working on neural network research, our goal was audacious: to design the last program we would ever write. We wanted to create a neural network capable of designing any other neural network. It was 1998. Neural networks were still in their infancy—hard to train, limited by scarce data, and constrained by computing power.

But with the rise of camera phones in the early 2000s, the world changed. Every mobile device could now capture images, and social networks began tagging enormous volumes of data. These same social media giants then leveraged their vast data and neural networks to tag, categorize, and promote even more content. Data—and the ability to learn from it—exploded.

Fast forward to 2017–2022, and our dream began to materialize. Neural networks learned to write text. Large Language Models (LLMs), powered by the Transformer architecture introduced in 2017, could predict the next word in a sentence—and thus generate entire passages from a simple prompt. Because they could write text, they could also write code. With the rise of GitHub, the largest public repository of source code ever assembled, these models had access to humanity’s collective programming knowledge. Suddenly, LLMs could read and write code on their own. We had done it. We had written the last program—because now, the neural networks could write the rest.

By 2025, coding-capable LLMs have become the dominant force in software creation, and their abilities continue to expand rapidly. Just two years ago, we could only dream of models capable of generating entire multi-file projects. Early LLMs could write simple routines in a chat window, but couldn’t yet manage complex, iterative workflows—updating, testing, and refining code across multiple files. All of that changed in the past few years. Today, agentic coding systems are extraordinarily productive, capable of building, debugging, and maintaining software almost autonomously.

Another remarkable development is code translation. Modern LLMs can now translate from one programming language to another as naturally as they translate between human languages. In the past, this was slow, tedious work requiring deep expertise in both languages. Today, it hardly matters which languages you know—LLMs can seamlessly transform entire codebases across paradigms and ecosystems.

An intriguing consequence of this progress is that digital hardware—especially processors and other chips—is also designed through code. This means that LLMs can, in principle, design not only the software that runs on hardware, but also the hardware itself.

Automated coding continues to improve daily. User interface (UI) design and automated code testing are becoming increasingly sophisticated. Even now, I’m impressed by how coding agents can perform complex UI testing driven entirely by natural language prompts. The value of LLMs and generative AI in text-based tasks like programming is undeniable. They are transforming not only how we write code—but how we conceive of software itself.
Implications
Because we now have computers capable of writing complex code, tasks that were once difficult are becoming remarkably easy. Even individuals with limited coding experience can now build web applications—or even mobile apps—to a surprising degree. And all of this can be done rapidly: what used to take weeks or months can now be achieved in just a few hours of collaboration with a large language model. Today, a two- or three-hour coding session can produce what once required a team of engineers and extensive development cycles.

In the past, we used to call exceptionally productive engineers “10x engineers”—those who could design and implement large projects with extraordinary speed and precision. But today, the true 10x engineer is the large language model itself. We have, in many ways, automated ourselves out of the job.

The effects of this shift are already rippling through society. The first and most direct consequence is in the job market. Computer scientists and software developers may find it increasingly difficult to secure traditional coding roles, as LLMs can perform many of these tasks more efficiently and at lower cost. Software development teams are becoming smaller, and the overall structure of software design is changing. The ideation and architectural stages—once followed by large implementation teams—are now led by a few humans guiding the AI.

Computer science degrees will still hold great value, especially for understanding complex systems, architectures, and networks. After all, computer science is not just about writing code—it’s about understanding computation itself. However, the routine act of coding, which once consumed much of a developer’s time, is rapidly being automated. That said, designing complex systems will continue to require human expertise—at least for now. The creative process of deciding what to build, how to structure it, and how to meet human needs still benefits from deep domain knowledge, intuition, and context that AI systems have yet to fully replicate.
Prospects
Maybe details don’t matter as much anymore. So, what’s next for young people interested in computer science and aspiring to build a career in this field? I would suggest focusing on the big picture—understanding systems, and even systems of systems.

For example, no coding agent today can physically assemble a computing rack or a gaming PC, though they can offer excellent recommendations on which components to use. Learning how to architect systems remains an essential skill. Requirements and specifications translate into different system architectures, and effectively guiding AI tools in design still requires a solid grasp of how systems work. A generic prompt like “design a computer rack” won’t yield an optimal or balanced result on its own. You still need to specify details such as connectivity, data rates, processors, thermal loads, and target devices. AI tools can assist with the design process, but they may not always select the perfect components—or even understand your priorities without clear direction. LLMs also make mistakes and can produce incomplete or inaccurate solutions, often because our prompts are too vague or our specifications too sparse.

So, do details still matter? Of course they do—especially if you want your design to meet precise requirements. However, with AI tools, we no longer need to manage every detail ourselves. Instead, we must focus on the broader design intent—the big picture that guides the details. And, of course, AI still can’t build physical systems. We’re still waiting for the day when robots can assemble that computing rack for us.
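To make the point concrete, here is a minimal sketch in Python of the kind of explicit specification that a vague prompt like “design a computer rack” leaves out. Every field name and number below is invented purely for illustration; the idea is simply that a machine-checkable spec (connectivity, power, thermal budget) gives an AI tool something precise to design against.

```python
from dataclasses import dataclass

@dataclass
class RackSpec:
    """A hypothetical rack specification: the details a vague prompt omits."""
    rack_units: int           # rack height in U
    uplink_gbps: int          # network data rate to the rest of the system
    nodes: int                # number of compute nodes in the rack
    watts_per_node: int       # electrical load per node
    max_thermal_watts: int    # cooling budget for the whole rack

    def total_power(self) -> int:
        # Simple aggregate the designer (or the AI tool) must check.
        return self.nodes * self.watts_per_node

    def within_thermal_budget(self) -> bool:
        return self.total_power() <= self.max_thermal_watts

# Example values, invented for the sketch:
spec = RackSpec(rack_units=42, uplink_gbps=100, nodes=20,
                watts_per_node=800, max_thermal_watts=20000)
print(spec.total_power(), spec.within_thermal_budget())
```

A spec like this can be handed to an AI assistant as part of the prompt, and the simple consistency checks (here, the thermal budget) catch exactly the kind of imbalance a generic prompt would never surface.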
The window is closing
As AI tools become more capable, they are taking on an ever-growing share of our daily tasks.
Office automation has been developing for decades, but only recently have we reached the point where AI systems can perform many of the cognitive and creative tasks once reserved for humans. Today, we have tools that can:
- Draft and edit text
- Improve writing (even this very passage)
- Translate between languages
- Search, sort, and organize data
- Summarize long or complex content
- Research topics and synthesize information
And now, AI is transforming the world of computing itself. In just the past few years, large language models have demonstrated remarkable abilities in:
- Writing code across multiple programming languages
- Translating code from one language to another
- Planning and managing multi-file software projects
- Debugging and testing complex systems
In the near future, once AI systems cross the physical barrier through advanced robotics, they will be able to assist with—or even perform—many of the physical tasks humans do today. The combination of intelligent software and capable robotics will mark the next major wave of automation, one that extends from digital workspaces into the real world.
So what should we learn to prepare for this future? Traditional disciplines will remain valuable, but their roles will evolve. Humans will increasingly act as directors or architects—guiding AI tools rather than performing every detailed step themselves. AI systems are already strong in planning, design, and architecture, but they still need humans to define goals, constraints, and specifications—in other words, to tell them what to build and why.
Therefore, the key skill for future generations will not be manual implementation but conceptual thinking—the ability to specify, design, and guide. We can safely let go of many implementation details, just as most of us no longer understand the inner workings of a radio, a television, a computer, or a printer—yet we use them effectively every day.
In the age of intelligent automation, understanding the big picture—how systems interact, what problems to solve, and how to communicate intent—will matter far more than knowing every technical detail.
About the author
I have more than 20 years of experience in neural networks in both hardware and software (a rare combination). About me: Medium, webpage, Scholar, LinkedIn.
If you found this article useful, please consider a donation to support more tutorials and blogs. Any contribution can make a difference!