Things I Learned About Code Without Ever Trying To
What building with AI accidentally teaches you.
When I built my first tool a few weeks ago, I wasn't trying to learn anything. I was trying to stop doing a tedious job manually. The learning was incidental, a side effect of actually making something work.
That's what makes vibe coding strange and interesting. You don't sit down with a curriculum. You sit down with a problem, and the concepts come to find you.
The Problem, Briefly
The task I was automating involves tracking how institutional funds mark private investments over time. Funds registered with the SEC are required to report their full portfolio holdings on Form N-PORT; the filings made public each quarter appear on EDGAR as NPORT-P disclosures. They show how funds are valuing private companies like Anthropic, OpenAI, or SpaceX. If you want to know how different funds are pricing the same asset across reporting periods, the data is technically public. It's just buried inside hundreds of XML filings on the SEC EDGAR database, and getting to it manually is brutally slow.
So I described the problem to Claude Code, and we started building.
APIs: The Internet Is Just Apps Talking to Each Other
The first thing that clicked was APIs. I'd heard the term before without really understanding it. What I learned by actually using one is that it's a structured way for one piece of software to ask another for information. The SEC EDGAR system has a public API. My tool sends it a query—say “give me all NPORT-P filings that mention Anthropic”—and EDGAR sends back a list of results. That's it. That's an API call.
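In code, that call is only a few lines. Here's a sketch in Node.js. The endpoint is EDGAR's public full-text search API, but the exact parameters, the placeholder identity, and the function names are my own illustration, not the tool's actual code:

```javascript
// Sketch of the API call described above, using Node's global fetch
// (available in Node 18+). Parameter choices here are assumptions.
function buildSearchUrl(phrase) {
  const url = new URL('https://efts.sec.gov/LATEST/search-index');
  url.searchParams.set('q', `"${phrase}"`); // exact-phrase search
  url.searchParams.set('forms', 'NPORT-P'); // restrict to NPORT-P filings
  return url.toString();
}

async function searchEdgar(phrase) {
  const res = await fetch(buildSearchUrl(phrase), {
    // The SEC asks automated tools to identify themselves (more on that below).
    headers: { 'User-Agent': 'Jane Doe jane@example.com' }, // placeholder identity
  });
  return res.json(); // structured data back: a list of matching filings
}
```

One function asks, the other system answers with structured data. That exchange is the whole idea.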
Once you've done it once, the abstraction collapses. The internet stops feeling like a series of websites and starts feeling like what it actually is: a giant network of systems exchanging structured data. I didn't read about that in a textbook. I just watched it happen in the terminal.
Rate Limits: The SEC Will Cut You Off
Here's something I ran into that I didn't anticipate. The SEC's EDGAR system has rate limits. Specifically, it asks that automated tools don't exceed roughly ten requests per second. Hit that ceiling and your requests start getting blocked.
The tool handles this by fetching filings in parallel batches of five, with a fifty-millisecond delay between requests to stay within the limit. Claude Code wrote that logic, not me. But understanding why it was there—that any large system serving millions of requests has to protect itself from being overwhelmed—was something I genuinely hadn't thought about before. Rate limits aren't bureaucratic friction. They're load management. Once you've hit one, you get it.
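The batching idea can be expressed in a few lines. This is my own minimal sketch of the pattern, not the tool's actual implementation, and it throttles between batches rather than between individual requests:

```javascript
// Process items in parallel batches of five, pausing 50 ms between
// batches to stay under EDGAR's ~10 requests/second ceiling.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchInBatches(urls, fetchOne, batchSize = 5, delayMs = 50) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    // Each batch runs in parallel; the batches themselves run one at a time.
    results.push(...(await Promise.all(batch.map(fetchOne))));
    if (i + batchSize < urls.length) await sleep(delayMs); // throttle
  }
  return results;
}
```

The tradeoff is speed against politeness: bigger batches finish faster but get you blocked sooner.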
The .env File: Secrets Shouldn't Live in Your Code
The SEC requires that any tool making automated requests to EDGAR identify itself with a name and email address in something called a User-Agent header. Without it, your requests get flagged or blocked.
The tool handles this through a .env file, a simple text file that sits locally on your machine and holds configuration values that shouldn't be hardcoded into the codebase itself. That file also gets listed in .gitignore, which means it never gets committed to GitHub. This was one of those moments where a practical decision taught me a security principle. You don't put secrets in your code because code gets shared. Environment variables are how you keep them separate. Obvious in retrospect, not obvious before.
Localhost: Your Computer Is a Server
When the tool is running, you open it by going to http://localhost:3002 in your browser. I'd seen that before and mostly glazed over it. What I now understand is that your computer can run a server—a program that listens for requests and sends back responses, just like any server on the internet. Localhost is just the address that points back to your own machine. The tool I built has an Express server running locally that handles requests from the frontend and goes out to EDGAR on your behalf. Frontend asks the server, server asks EDGAR, EDGAR responds, server passes it along.
Once you understand that, a lot of things about how software is structured start to click.
Backend Logic: Data Doesn't Arrive Clean
NPORT filings are XML files, and some of them are enormous. The tool parses documents with hundreds of thousands of lines of text to find the handful of data points that actually matter: shares, market value in USD, currency, exchange rate, then price per share calculated as market value divided by shares.
That logic runs on the server, invisibly, before anything reaches the screen. I didn't write it, but I understood what it needed to do because I had to describe the problem clearly enough for Claude Code to build it. Knowing what you want the data to look like forces you to understand how the data is structured underneath. You can't really shortcut that part.
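The shape of that extraction is easy to show in miniature. The tag names and numbers below are illustrative, not taken from a real filing, and a real parser would use a proper XML library rather than pattern matching:

```javascript
// A simplified NPORT-style holding (illustrative values).
const sample = `
<invstOrSec>
  <name>Anthropic PBC</name>
  <balance>12500</balance>
  <valUSD>750000.00</valUSD>
</invstOrSec>`;

// Pull the text content of one tag out of an XML string.
function extractTag(xml, tag) {
  const m = xml.match(new RegExp(`<${tag}>([^<]*)</${tag}>`));
  return m ? m[1] : null;
}

const shares = parseFloat(extractTag(sample, 'balance'));
const marketValue = parseFloat(extractTag(sample, 'valUSD'));
const pricePerShare = marketValue / shares; // 750000 / 12500 = 60
```

A few data points out of a few hundred thousand lines; everything else in the filing is noise for this purpose.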
Where This Leaves Me
I went into this project trying to eliminate a tedious manual workflow. I came out with a working tool and, without really meaning to, a rough mental model for how software fits together.
Something I've started to notice: building forces you to encounter real concepts, in context, when they matter. APIs, environment variables, rate limits, servers running on your own machine. None of it was on my radar six weeks ago. I didn't go looking for any of it. It just showed up because the project needed it to.
I'm not learning to code. But I'm learning what code does, and why it works the way it does. Honestly, I'm not sure there's much difference.