I got curious.
Everyone talks about how AI is “transforming work” and “10xing productivity.” The internet is full of people claiming they replaced entire teams with ChatGPT.
So I decided to actually test it. For 30 days, I would use AI for everything I reasonably could. Every email, every document, every creative task, every decision, every meeting prep — if AI could help, I’d try it.
Here’s what actually happened.
The Rules
Before we get into results, here are the rules I followed:
- Try AI first. Before doing any task manually, I had to at least attempt it with AI.
- Track everything. Time spent, quality of output, whether I ended up using the AI result.
- Be honest. If AI made things worse or slower, I’d record that too.
- Use real work. No toy examples. Actual emails, actual projects, actual decisions.
I used mostly Claude and ChatGPT, switching based on task type. I also used various skills from our library (yes, this is a plug, but it’s also true).
Let’s go week by week.
Week 1: The Honeymoon Phase
- Tasks attempted with AI: 47
- Tasks where AI helped: 38
- Tasks where AI made things worse: 3
- Time saved estimate: ~4 hours
What Worked Immediately
Email drafting — This was the obvious winner. I’d write a rough brain-dump, paste it into Claude, and get back a clean version in seconds. I probably send 20-30 emails a day, and AI cut my drafting time by at least 60%.
Meeting prep — Before calls, I’d paste in context about the person/company and ask for talking points, potential questions they might ask, and things to look up. Surprisingly useful.
Summarizing long documents — I had a 40-page contract to review. Asked AI to summarize key terms and flag anything unusual. It caught two things I would have missed, including a weird IP clause.
What Didn’t Work
Writing social posts — The output was generic and felt fake. I’d spend more time editing than if I’d just written from scratch.
Creative brainstorming — I asked AI for product name ideas. The suggestions were either boring (“TaskFlow,” “WorkHub”) or trying too hard (“SynergiMax”). Not helpful.
Anything requiring my specific voice — AI doesn’t know how I talk. First drafts needed heavy editing.
Week 1 Verdict
Productivity gain was real but concentrated in specific areas. AI excelled at structured tasks (emails, summaries, prep) and struggled with anything requiring creativity or personal style.
Week 2: Finding the Edges
- Tasks attempted with AI: 52
- Tasks where AI helped: 41
- Tasks where AI made things worse: 5
- Time saved estimate: ~5 hours
New Discoveries
Code debugging — I’m not a developer, but I manage a small site. When something broke, I’d paste the error into Claude and get an explanation plus fix. This saved me from Stack Overflow rabbit holes.
Learning new concepts — I needed to understand unit economics for a project. Instead of reading articles, I asked AI to explain it like I’m smart but new to the topic, with examples from SaaS businesses. 10 minutes instead of 2 hours.
Editing my own writing — Not first drafts (AI was still bad at those in my voice), but editing. I’d write something, then ask “make this 30% shorter without losing meaning.” The suggested cuts were almost always good.
Failures
Anything with nuance — Asked AI to help craft a sensitive message to a team member. The output was technically correct but emotionally tone-deaf. Had to rewrite completely.
Fact-checking — AI confidently told me a company was founded in 2018. It was founded in 2015. Learned to verify anything factual.
Complex analysis — Asked for competitive analysis of a market I know well. The output was surface-level and missed key players. AI doesn’t have insider knowledge.
Week 2 Verdict
Started to understand where AI has blind spots. Great for “general knowledge” tasks, unreliable for anything requiring deep expertise or emotional intelligence.
Week 3: Building Systems
- Tasks attempted with AI: 61
- Tasks where AI helped: 52
- Tasks where AI made things worse: 2
- Time saved estimate: ~7 hours
The Breakthrough
I realized I was wasting time re-prompting for similar tasks. So I built systems:
Email Templates with Variables — Created a prompt for each email type (intro, follow-up, proposal). Now I just fill in the blanks.
Standard Prep Docs — Before any meeting, same prompt: background, talking points, risks, questions. Saved as a shortcut.
Content Outline Generator — For any piece I need to write, a prompt that generates structure. I fill in the substance.
This changed everything. Instead of crafting prompts each time, I’m loading pre-built ones. Much faster.
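To make that concrete, here’s a rough sketch of what one of these reusable prompts looks like in practice. It’s illustrative Python, not a real tool; the template names and placeholder fields are just stand-ins for the kind of thing I save and reuse.

```python
# Minimal sketch of reusable prompt templates (illustrative only).
# Each template is a plain string with named placeholders; filling one in
# takes seconds compared to writing a fresh prompt from scratch.

TEMPLATES = {
    "follow_up_email": (
        "Draft a short, friendly follow-up email.\n"
        "Recipient: {recipient}\n"
        "Context: {context}\n"
        "Goal: {goal}\n"
        "Keep it under 120 words and end with a clear next step."
    ),
    "meeting_prep": (
        "Prepare me for a meeting.\n"
        "Person/company: {who}\n"
        "What I know so far: {background}\n"
        "Give me: key talking points, likely questions they'll ask, "
        "risks to watch for, and three things to look up beforehand."
    ),
}


def build_prompt(name: str, **fields: str) -> str:
    """Fill a saved template with the specifics for this one task."""
    return TEMPLATES[name].format(**fields)


if __name__ == "__main__":
    print(build_prompt(
        "follow_up_email",
        recipient="Jamie (prospective client)",
        context="we spoke Tuesday about the Q3 rollout",
        goal="confirm a 30-minute call next week",
    ))
```

The code itself isn’t the point. The point is that the instructions I’d otherwise retype live in the template, and the only thing I supply each time is the part that changes.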
Unexpected Wins
Meal planning — Gave AI my dietary preferences and asked for a week of dinners. Simple, worked perfectly, not sure why I didn’t think of this earlier.
Travel planning — “Build a 3-day itinerary for [city] focused on [interests]. Mix touristy and local spots. Include restaurants.” Better than most travel blogs.
Decision frameworks — Stuck on a decision? AI would walk me through a decision matrix without me having to set one up. Genuinely clarifying. A rough sketch of that structure is below.
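If you’ve never seen one, this is roughly the structure AI walked me through, sketched here in Python with made-up options, criteria, and weights; the real numbers came out of the conversation, not a script.

```python
# Rough sketch of a weighted decision matrix (all options, criteria, and
# weights here are made up for illustration).
weights = {"cost": 0.4, "speed": 0.35, "risk": 0.25}  # weights sum to 1.0

# Score each option from 1 (bad) to 5 (good) on each criterion.
options = {
    "Build it in-house": {"cost": 2, "speed": 2, "risk": 4},
    "Buy an off-the-shelf tool": {"cost": 4, "speed": 5, "risk": 3},
}

for name, scores in options.items():
    total = sum(weights[c] * scores[c] for c in weights)
    print(f"{name}: {total:.2f}")
```

Most of the value is in writing the weights down at all; the totals mostly confirm what the weighting already implies.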
What I Stopped Using AI For
Creative writing — Gave up entirely. AI’s creative output is mid at best, and editing it to not sound like AI took longer than writing myself.
Anything confidential — Started being more careful about what I pasted into AI. Not paranoid, just thoughtful.
Week 3 Verdict
Systems are the key. AI gets much more useful when you build repeatable workflows instead of starting from scratch every time.
Week 4: The New Normal
- Tasks attempted with AI: 58
- Tasks where AI helped: 51
- Tasks where AI made things worse: 1
- Time saved estimate: ~6 hours
Final Observations
By week 4, AI use became automatic. I didn’t think “should I use AI for this?” — I just did, for certain task types.
Tasks I now always use AI for:
- Email drafting and editing
- Summarizing long content
- Meeting prep and follow-up
- Explaining concepts I don’t understand
- Initial research (followed by verification)
- Data formatting and cleanup
- First-pass editing of my own writing
Tasks I learned AI can’t help with:
- Anything requiring my specific voice or personality
- Creative work that needs to feel original
- Sensitive interpersonal communication
- Deep analysis in areas I know better than AI
- Anything requiring current/specific facts
The Final Numbers
- Total tasks attempted with AI: 218
- Tasks where AI meaningfully helped: 182 (83%)
- Tasks where AI made things worse: 11 (5%)
- Tasks where AI was neutral: 25 (12%)
- Estimated time saved: ~22 hours over 30 days

That’s roughly 45 minutes per day.
Not the “10x productivity” the hype promised, but meaningful. 45 minutes daily is ~275 hours per year. That’s time I can spend on things that actually require a human.
The Honest Assessment
AI is great for:
- Structured output — Emails, summaries, outlines, templates
- Explaining things — Better than Googling for most concepts
- First drafts of routine content — Saves the blank-page anxiety
- Processing information — Reading, extracting, reformatting
- Learning — Ask follow-up questions, get instant explanations
AI is mediocre for:
- Anything creative — It’s derivative by nature
- Anything personal — Doesn’t know your voice, context, or relationships
- Analysis in areas you know well — You’ll spot the gaps immediately
- Current information — Knowledge cutoffs and hallucinations are real
AI is bad for:
- Replacing thinking — It can assist decisions, not make them
- Emotional/sensitive communication — Technically correct, humanly wrong
- Anything requiring verification — Always check facts
What Stuck (The Keepers)
After 30 days, here are the AI uses that became permanent parts of my workflow:
| Use Case | Skill/Prompt | Frequency |
|---|---|---|
| Email polishing | Professional Email Writer | Daily |
| Document summary | Executive Summary Generator | 3-4x/week |
| Meeting prep | Custom prompt | Before every meeting |
| Concept learning | AI Tutor | Weekly |
| Writing editing | Custom “make shorter” prompt | Every time I write |
| Decision frameworks | Decision Matrix Creator | Monthly |
These tools are now just part of how I work. Not revolutionary, but consistently useful.
What I’d Tell Myself Before Starting
If I could go back to Day 1, here’s what I’d say:
Start with email. It’s the easiest win and you’ll see results immediately.
Build prompts, not habits. Don’t try to “learn prompt engineering.” Just save prompts that work and reuse them.
AI is the drafter; you’re the editor. Never send or publish AI output without reading it. Ever.
Verify facts. AI is confident even when wrong. Check anything that matters.
It’s a tool, not a replacement. AI handles the parts of work that feel like drudgery. It doesn’t replace the parts that require judgment.
The time savings compound. 5 minutes saved on one email is nothing. 5 minutes saved on 10 emails a day, every working day for a year, adds up to weeks of reclaimed time.
Try Your Own Experiment
You don’t need 30 days. Try one week:
- Use AI for every email you send
- Summarize one long document
- Prep for one meeting with AI assistance
- Ask AI to explain something you’ve been confused about
Track what works and what doesn’t. Build your own list of keepers.
The goal isn’t to use AI for everything. It’s to figure out where it actually helps your work.
The Skills I Used Most
During the 30 days, these were my most-used skills:
- Professional Email Writer — Easily #1
- Executive Summary Generator — For every long doc
- AI Tutor — Learning new concepts fast
- Meeting Notes Action Extractor — Post-meeting clarity
- System Prompt Architect — Building custom assistants
If you’re starting from zero, those five will cover most use cases.
Ready to try your own experiment? Browse all skills or start with the Lazy Person’s Guide for the essentials.