Reflections on Using LLMs to Learn Rust

10 min · Joe Lopes
A robot assisting a human to fix a machine in a dark workshop.

A couple of months ago, I decided to learn Rust and started with The Rust Programming Language—more on this here 🦀. In today's world of ever-present LLMs, I found myself chatting with a few to help make sense of certain concepts. At the same time, I disabled LLM integration in my code editor while working through the exercises. That contrast got me thinking about how my learning process has changed, and how it compares to the way I used to study. This post is a reflection on that shift: its upsides, its drawbacks, and what it might mean for how we learn today.

Disclaimer

I'm not an AI expert, so these are just my impressions from using these tools.

From Books and Peers to the Internet

Years ago, when I started learning Pascal 📜 in college, I used the Borland Turbo Pascal 7.0 IDE and a book. Not much changed when I later learned C 🅲 and Java ☕: it was basically just the compiler, the book, and me. Occasionally, I'd have a colleague who was also learning the language, and we'd exchange knowledge. This was great, especially when the other person had more experience, because I could learn not only from the book and my own attempts at solving exercises, but also from those peer interactions.

The learning process isn't just about reading, doing exercises, and successfully compiling programs; it's also about analyzing what you're learning, making connections to other topics, and reflecting on the subject itself. That includes thinking about the pros and cons of a feature, how it was designed, how it's meant to be used, and how it interacts with other features. It's important to remember: a programming language is not the end; it's the means. The goal is to solve real problems.

A few years after college, I began learning Python 🐍. The process was a bit different by then: I had better internet access and could read documentation and articles in English. The rise of Web 2.0 was transforming the internet from a static collection of pages into a more dynamic, user-generated space. With it came more blogs, websites, and online manuals. So, my learning experience expanded beyond the book and interpreter: I could now search online to complement and reinforce what I was studying. Unfortunately, having finished college, I no longer had peers to learn with. But I had gained the internet.

With the proliferation of Web 2.0, forums and communities started to flourish, providing new ways to share experiences with others. Most of these communities were English-based, which was a challenge for me as a non-native speaker. But eventually, one of them became a game-changer for anyone learning a new programming language: Stack Overflow. More often than not, someone had already encountered the same problem, asked about it, and received an answer. While it sometimes took effort to map your own problem to those existing answers, when it worked, it felt like a huge achievement.

The New Era: Learning with an LLM

So now, in 2025, I've started learning a new language: Rust 🦀. I often find myself with the book open, the editor/compiler ready, and an LLM chat running in parallel. Intuitively, instead of turning to Google or Stack Overflow, I ask the LLM my questions. Like a helpful peer, it replies. Not with a generic, pre-made answer like you'd find on Stack Overflow or a blog, but with something tailored to my specific context. No more searching, scanning, and adapting answers to fit my situation.

I've noticed the LLM can take on different roles depending on how I use it. So far, I've identified at least two distinct personas: the assistant and the programmer.

As an assistant, the LLM helps by offering insights, creating better examples, writing snippets, troubleshooting errors, or even discussing more abstract or philosophical ideas. It feels like having a human expert or mentor by your side. If it's not obvious already: this dramatically accelerates learning. But, of course, that speed comes with trade-offs—more on that later.

The second role is the programmer. Here, you act as the project manager: you define the goals, constraints, and requirements, and the LLM implements them. Yes, I'm talking about vibe coding 🌈. Personally, I'm not a big fan, but I recognize its utility, especially for quick scripts or small, well-defined tasks. It's not really learning, though, since you're delegating the problem-solving and implementation to the model. That said, if you take the time to read and understand the output, there's still something to gain.

Note

I'm not judging people who practice vibe coding, but I personally believe that projects developed without a structured process are often prone to security flaws and other issues. That said, while writing this post, I came across an emerging approach called Spec-Driven Development (SDD). In SDD, you begin with a precise, machine-readable specification (typically Markdown) and use an LLM to generate the code. It feels like a more engineered, and potentially more secure, form of vibe coding. GitHub's Spec Kit is a great example of this in action. 🪙🪙

Having access to an assistant or programmer like this would've been unimaginable just a few years ago. Here's how I've been using LLMs in practice:

  • Ad-hoc: Starting fresh chats with one or more LLMs to explore specific topics or language features. This works well because, in my experience, conversations can get biased by earlier prompts, which sometimes leads to misleading answers. Starting a new thread resets that context, though having to re-explain everything each time is a bit tedious.

  • Simple editor integration: Copilot-style completion, where your editor becomes smart enough to auto-complete lines or entire code blocks. It's great for productivity and vibe coding, but less so for learning. In fact, when you're trying to understand something deeply, this can be distracting, as you're constantly dismissing suggestions. That's why I disabled it while learning Rust.

  • Agent integration: This is when you bring the chat into your code editor, giving the LLM full context of your entire project—multiple files, configurations, docs, etc. The results feel almost magical: with complete context and agent-like capabilities, the LLM can make highly precise suggestions or even directly modify your code. It blurs the line between assistant and programmer in a very useful way.

LLMs also shine as debuggers. For example, while implementing base64 encoding/decoding, I accidentally used a padded engine for encoding and an unpadded one for decoding, a classic error. Once I narrowed it down to those two functions, I asked the LLM to investigate. It immediately spotted the mismatch and offered the correct fix. That alone saved me 30–60 minutes—recall I'm still new to Rust.
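For the curious, here's a minimal sketch of that kind of mismatch, assuming the base64 crate's engine API (0.21+). It's illustrative, not the actual project code:

```rust
// Minimal reproduction of a padded/unpadded engine mismatch,
// assuming `base64 = "0.22"` (or 0.21) in Cargo.toml.
use base64::{
    engine::general_purpose::{STANDARD, STANDARD_NO_PAD},
    Engine as _,
};

fn main() {
    let data = b"hello";

    // Encoded with the padded engine: "aGVsbG8=" (note the trailing '=').
    let encoded = STANDARD.encode(data);

    // Decoding that string with the unpadded engine fails,
    // because STANDARD_NO_PAD rejects the padding character.
    assert!(STANDARD_NO_PAD.decode(&encoded).is_err());

    // The fix is simply to use the same engine on both sides.
    let decoded = STANDARD.decode(&encoded).unwrap();
    assert_eq!(decoded, data.to_vec());
}
```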

The Pitfalls and Limitations of LLMs

While LLMs can be incredibly helpful, they can also pull you away from the learning process, depending on how you use them. At first glance, they might seem perfect. But no, there are real drawbacks, hallucinations 🤪 being one of the biggest. Sooner or later, the model will suggest a feature, module, or behavior that simply doesn't exist. Better prompts can reduce the problem, but it remains serious and is the main reason every LLM output must be reviewed carefully.

Sometimes, you even need to restart the chat because the model seems stuck, unable to follow a coherent line of thinking. It's like you have to refresh its memory. I've noticed moments where the LLM becomes lazy, offering shallow or repetitive answers, like suggesting dozens of irrelevant modules instead of focusing on fixing an actual error. Interestingly, being a bit more assertive or even harsh with the model often gets it back on track. Fabio Akita mentioned this in the Flow podcast: since the model is designed to please, clearly expressing dissatisfaction can cause it to "rethink" and improve its responses.

Another memorable experience happened while discussing an encryption/decryption feature. I forgot to mention that the user needed to supply the private key. When I asked the LLM to generate the code, I noticed it had embedded both the public and private ⚠️ keys directly in the code. This kicked off a conversation about security. The model didn't suggest alternatives like reading the key from a file or an environment variable: it just embedded the key and didn't raise any warning. Once I asked it to add an argument for the private key, boom: it removed the hardcoded key from the code.
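What I ended up asking for looks roughly like the sketch below: take the key's location from a CLI argument or an environment variable instead of embedding the key material in the source. The names (PRIVATE_KEY_PATH, load_private_key) are hypothetical, the code is standard-library-only, and it stands in for the real decryption routine:

```rust
// Hypothetical sketch: load the private key from a user-supplied location
// instead of hardcoding it in the source. Names are illustrative only.
use std::{env, error::Error, fs};

fn load_private_key() -> Result<String, Box<dyn Error>> {
    // Prefer an explicit CLI argument, fall back to an environment variable.
    let key_path = env::args()
        .nth(1)
        .or_else(|| env::var("PRIVATE_KEY_PATH").ok())
        .ok_or("usage: decrypt <private-key-path> (or set PRIVATE_KEY_PATH)")?;
    Ok(fs::read_to_string(key_path)?)
}

fn main() -> Result<(), Box<dyn Error>> {
    let private_key_pem = load_private_key()?;
    // ...pass `private_key_pem` to the decryption routine here...
    println!("loaded {} bytes of key material", private_key_pem.len());
    Ok(())
}
```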

I've also seen organization issues. The LLM sometimes adds unused variables or modules, making the code unnecessarily messy. And when working with crates that recently had major version updates (like v1 to v2), it often suggested outdated syntax or features, resulting in code that wouldn't compile. In those cases, I had to turn to the documentation and real examples to get things working.

Ultimately, accelerated learning with LLMs comes with a trade-off: if you rely too heavily on them, you risk weakening your own reflection and critical thinking. It's like depending on a knowledgeable peer who might suddenly disappear: if you lose access to the LLM, you could be stuck. That's why it's crucial to keep your learning process active. Use the LLM as a tool, but always question the answers and stay engaged with the underlying concepts 💡.

Conclusion

I'm very positive about the advantages of using LLMs for learning. However, you need a clear direction, like a solid book or structured guide, and the LLM should serve as an assistant along that path. Since LLMs are often embedded in code editors, it's worth disabling them while practicing. These models have already consumed most available programming books and Git repositories, so when you're working through exercises, they can end up autofilling the entire solution, which defeats the purpose.

You also need to be mindful that LLMs can't replace your own experience—especially when it comes to making mistakes. At one point, I noticed my brain getting lazy; instead of trying something out and learning from the results, I kept asking the LLM whether it would work. That's not ideal, because a big part of the learning process involves running into errors, reading error messages, interpreting them, and working through the solutions yourself.
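As a small, generic illustration (not from my project), this is the kind of error I think you should wrestle with yourself before asking the model:

```rust
// A classic ownership error (E0382) and the kind of message worth reading yourself.
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    let greeting = String::from("hello, rust");
    let loud = shout(greeting); // `greeting` is moved into `shout` here

    // Uncommenting the next line yields:
    // error[E0382]: borrow of moved value: `greeting`
    // println!("{greeting}");

    println!("{loud}");

    // One fix: make `shout` take `&str` and call it as `shout(&greeting)`,
    // so ownership stays in `main`; cloning before the call also works.
}
```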

At the end of the day, LLMs are powerful learning tools as they can significantly speed up your progress. With an on-demand expert assistant at your side, you can not only learn faster but also start building things more quickly. In development, an LLM can act like a pair programming peer, helping you make decisions and implement ideas.

However, like any assistant or peer, it doesn't replace real understanding. If you don't grasp the underlying technology, you're likely to accept whatever the LLM suggests, just as you might with a human colleague. In that sense, it's not a new problem. That's why having a foundational resource is essential. It serves as the backbone of the learning process.

By using LLMs, I estimate that I sped up my learning by more than three months, a great example of how AI can boost productivity. This aligns with Linus Torvalds' 🐧 perspective that LLMs are tools meant to help us get better at what we do. So yes, LLMs are a fantastic resource, as long as you don't sacrifice your own understanding 🥇. Keeping these warnings in mind will help you make the most of them. 👊