I set up my first AWS account about six years ago. Within three weeks I had a bill for just over four hundred quid. I’d left a NAT gateway running, forgotten about an RDS instance I’d spun up to test something, and had CloudTrail logging to S3 in a way that was generating thousands of tiny PUT requests. All stuff I could have avoided if someone had walked me through the basics first.
So here’s the walkthrough I wish I’d had.
Set up billing alerts before you do anything else
This is step one. Before you launch a single EC2 instance, go to the Billing dashboard and set up a budget. AWS Budgets lets you set a monthly threshold and get an email when you hit 50%, 80%, and 100% of it. Set it at whatever you’re comfortable with for experimentation. Even fifty quid is fine. The point is you’ll know when something unexpected starts costing money.
While you’re there, enable Cost Explorer. It takes about 24 hours to start showing data, so turn it on now and forget about it until tomorrow.
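You can also script the budget instead of clicking through the console. This is a hedged sketch of the request the Budgets API expects; the account ID and email are placeholders, and the actual boto3 call is left commented out so the payload builder runs on its own:

```python
# Sketch of creating a monthly cost budget with 50/80/100% alert
# thresholds, roughly what the Budgets console sets up for you.

def build_budget(name, monthly_limit, email, thresholds=(50, 80, 100)):
    """Build the payloads for the AWS Budgets CreateBudget call."""
    budget = {
        "BudgetName": name,
        # Unit should match your billing currency (assumption: GBP)
        "BudgetLimit": {"Amount": str(monthly_limit), "Unit": "GBP"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    }
    notifications = [
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": pct,
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [{"SubscriptionType": "EMAIL", "Address": email}],
        }
        for pct in thresholds
    ]
    return budget, notifications


budget, notifications = build_budget("experiment-budget", 50, "you@example.com")

# import boto3
# boto3.client("budgets").create_budget(
#     AccountId="123456789012",  # placeholder account ID
#     Budget=budget,
#     NotificationsWithSubscribers=notifications,
# )
```

Keeping the thresholds in one place means you can apply the same budget to every new account you create later.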
Use the free tier properly
AWS gives you 12 months of free tier on a lot of services. The catch is the limits are specific and easy to exceed without realising.
A few things that catch people out:
- EC2: You get 750 hours of t2.micro or t3.micro per month, pooled across instances. That covers one instance running 24/7, or two instances running 12 hours a day each. Run two around the clock and you’re paying for roughly half your usage.
- RDS: Same deal, 750 hours of db.t3.micro. But only for Single-AZ. If you tick the Multi-AZ box because it sounds sensible, that’s not free tier.
- S3: 5GB of standard storage, 20,000 GET requests, and 2,000 PUT requests. If you’re using S3 for logging or file uploads during development, keep an eye on it. Verbose logging chews through that PUT allowance surprisingly fast.
- Data transfer: 100GB out to the internet per month. Sounds like loads until you’re serving images or running an API with decent traffic.
Bookmark the free tier page and check it before spinning up anything new.
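The EC2 allowance is a shared monthly pool, which makes it easy to sanity-check a planned month with a few lines of arithmetic. A minimal sketch (the hours are illustrative):

```python
# The 750 free hours are a monthly pool shared across all eligible
# instances, not a per-instance allowance.

FREE_TIER_HOURS = 750

def billable_hours(instance_hours):
    """Hours you pay for after the shared free-tier pool is used up."""
    total = sum(instance_hours)
    return max(0, total - FREE_TIER_HOURS)

# One t3.micro running 24/7 for a 30-day month: fully covered.
print(billable_hours([720]))        # 0

# Two t3.micro instances both running 24/7 blow through the pool,
# so you pay for roughly half the total usage.
print(billable_hours([720, 720]))   # 690
```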
One account is a bad idea
This is something we tell every client starting on AWS: don’t put everything in one account. Use AWS Organizations from the start with at least three accounts.
- A management account for billing and organisation-level stuff. Don’t run workloads here.
- A development account for messing about and testing.
- A production account for anything live.
It costs nothing to set up and it means your dev experiments can’t accidentally affect production. It also makes cost tracking much cleaner because you can see spend per account without needing to tag everything perfectly.
If you’re a solo developer or a tiny team, you might think this is overkill. It’s not. I’ve helped three different startups unpick single-account setups and it’s always painful.
Stop using the root account
The root account has full access to everything, including closing the account, changing billing, and things IAM policies can’t restrict. Do these three things immediately after creating your account:
- Enable MFA on the root account. Use a hardware key if you can, or at least an authenticator app.
- Create an IAM user (or use IAM Identity Center if you’re setting up Organizations) for your day-to-day work.
- Stop logging in as root. Put the credentials somewhere safe and don’t touch them unless you need to.
We’ve seen businesses locked out of their own AWS accounts because someone left and they were using root credentials on a personal email. Don’t let that be you.
Pick a region and stick with it
Unless you have a specific reason to use multiple regions (like serving users in different continents), pick one region and put everything there. For UK businesses, eu-west-2 (London) is the obvious choice.
Running resources across regions costs money in data transfer and makes networking more complicated. You can always expand later when you need to.
Use infrastructure as code from the start
I know it’s tempting to click around in the console when you’re learning. But get into the habit of using CloudFormation or Terraform early. Even simple stuff.
The reason is simple: if you build something in the console and then need to recreate it, or explain it to a colleague, or figure out what changed, you’ve got nothing to refer back to. With infrastructure as code, your setup is documented, repeatable, and version controlled.
You don’t need to be an expert. Start with the basics. A VPC, a subnet, a security group. Learn the pattern and build from there.
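As a rough idea of what that starter set looks like in Terraform, here is a minimal sketch. The resource names and CIDR ranges are illustrative, not a recommendation:

```hcl
# A minimal starter set: a VPC, a subnet, and a security group.
provider "aws" {
  region = "eu-west-2"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  # Allow HTTPS in; nothing else.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Once this is in version control, recreating the setup in a second account is a `terraform apply` rather than an afternoon of clicking.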
Security defaults that matter
AWS gives you a lot of rope. Some sensible defaults to set up early:
- S3 Block Public Access: Turn this on at the account level. It prevents any S3 bucket from being made public by accident. You can override it for specific buckets if you need to, but the default should be locked down.
- CloudTrail: Enable it. It logs every API call made in your account. When something goes wrong (and it will), this is how you work out what happened.
- Security groups: Only open the ports you need. Don’t leave SSH (port 22) open to 0.0.0.0/0. Restrict it to your IP or use Systems Manager Session Manager instead, which is free and doesn’t require opening any inbound ports.
- Default VPC: AWS creates a default VPC in every region. It’s fine for quick tests but don’t run production workloads in it. Create a proper VPC with private subnets for anything serious.
Don’t over-architect on day one
It’s easy to read AWS best practice guides and feel like you need a service mesh, a CI/CD pipeline with six stages, and a multi-region active-active setup before you launch anything. You don’t.
Start with what you need. A VPC, an EC2 instance or a container, a database if you need one. Get something working. Then improve it.
The teams that get into trouble are the ones that spend two months building the perfect infrastructure and never ship anything, or the ones that skip the basics and end up with a security incident or a surprise bill in month two.
There’s a middle ground. Set up the boring stuff (billing alerts, separate accounts, MFA, basic logging) and then build iteratively.
Common first-month mistakes
Things I see regularly when reviewing new AWS setups:
- Elastic IPs: AWS now charges for public IPv4 addresses whether or not they’re attached, and an Elastic IP left allocated after you’ve torn down the instance is pure waste. It’s not much, but it’s the principle. Clean up after yourself.
- EBS snapshots: They accumulate. Set up a lifecycle policy or you’ll have hundreds of them in six months.
- Lambda cold starts: Not a billing issue, but if you’re using Lambda for an API and wondering why it’s slow, this is probably why. Keep functions warm or use provisioned concurrency for latency-sensitive stuff.
- Leaving dev environments running overnight: A t3.large costs about thirty quid a month running 24/7. If you only need it during working hours, shut it down at night. Or use an auto-scaling schedule.
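The overnight-shutdown point is worth putting numbers on. The hourly rate below is illustrative, not current pricing, but the ratio holds regardless of rate:

```python
# Rough arithmetic for running a dev instance 24/7 versus office hours.

HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(rate_per_hour, hours_per_day=24, days_per_week=7):
    """Approximate monthly cost for an instance on a daily schedule."""
    hours = hours_per_day * days_per_week * (HOURS_PER_MONTH / (24 * 7))
    return rate_per_hour * hours

rate = 0.08  # illustrative on-demand rate in pounds per hour

always_on = monthly_cost(rate)
office_hours = monthly_cost(rate, hours_per_day=10, days_per_week=5)

print(round(always_on, 2))
print(round(office_hours, 2))
print(f"saving: {1 - office_hours / always_on:.0%}")  # roughly 70%
```

Whatever the instance actually costs, a 10-hour weekday schedule cuts the bill by about 70%, which is why a simple start/stop schedule pays for itself immediately.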
Where to go from here
Once you’ve got the basics in place, the natural next steps are:
- Set up a proper CI/CD pipeline so you’re not deploying from your laptop
- Move to containers (ECS or EKS) once you’ve outgrown a single EC2 instance
- Implement proper monitoring with CloudWatch alarms
- Look at Savings Plans once your usage is stable and predictable
AWS is a great platform, but it rewards preparation. Spend an afternoon on the boring setup stuff and you will save yourself weeks of headaches later.