So, a while back I made a popular post on Reddit with some tips on how I use Cursor, because I never ran into any limits despite being one of its heaviest users (in my region).
Now, before I get called a bot again: I do not agree with Cursor’s handling of the limits and the pricing changes.
While I get why they did it, their approach erased my sympathy. Had they been open and transparent, I’d have defended them (and I think most users would’ve too). Instead, they quietly changed things, repeatedly, and rightfully lost user trust.
But this post isn’t about that, it’s about why Cursor still gives me immense productivity boosts.
It’s still a great product; for some projects and some workflows, it’s still my favorite. However, like many of you, I had to adjust. But not as much as I thought I’d have to.
For some things I stopped using Cursor, but mostly my existing system was already token-efficient and only required minimal adjustments.
I still use it heavily. Somewhere down the line I was told I’d hit the Claude 4 limit on the 27th of July (about a week before my reset).
I switched mostly to Auto, and partly because of how I use Cursor, I haven’t felt too much of a degradation in quality (it’s there, no doubt, but much less than I thought there would be).
Note: just like in my previous post, I have usage-based pricing off. I never use Max mode and, aside from Gemini, I use no thinking models.
I have ChatGPT, Claude, and Gemini subscriptions. You don’t need any of these to apply the workflows and tricks I’ll share, though having them won’t hurt; you’ll see why.
Start your plan outside of Cursor
I pretty much always start my plan outside of Cursor: either directly in ChatGPT, Claude, or Gemini, or with Claude Code or Gemini CLI.
My first prompt is unstructured. I often use voice; I just dump all that’s in my head or, if I have notes from Obsidian or my Bookmark/Notes system, I copy it over.
Then I use an “old” prompting technique.
I tell the model to ask me questions and poke holes in my reasoning, or to refine the idea with me.
But I don’t do this in a single thread; I have several. There are different questions, and they should lead to different goals.
I have at least 3 threads:
1. Ask me questions until we have a clear and structured description of my project
2. Ask me questions until we have described all the functionality and purpose of the project.
3. Ask me questions to poke holes in my idea and find issues with my reasoning or plan.
Number 3 I use on the outputs of 1 and 2.
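To illustrate, my hole-poking prompt (number 3) is usually something like this; the exact wording varies:

```
Here is my current project description: [refined prompt from thread 1 or 2].
Poke holes in it. Ask me questions that expose gaps, contradictions, and risky
assumptions in my reasoning and plan. Only ask questions for now; solutions
come later.
```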
Note here: I do not create endlessly long chats, and I don’t actually answer the questions in the chat.
Instead, I refine my original prompt. I.e., I go back, edit, and resend the prompt, or just start a new chat.
Every round there are new questions, but fewer and fewer, until at some point I’m happy.
I’ll ask it to create a plan, and then I will tell it to critically review the plan a couple of times.
Sometimes I’d get input from another model. I.e. I ask Gemini to review ChatGPT’s plan or vice versa.
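The cross-review is along these lines (not an exact script):

```
Below is a plan another assistant produced for my project. Critically review it:
list weaknesses, missing steps, and anything that contradicts the project
description, ranked by severity.
```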
Don’t code yet
I always tell models that these plans are implementation-free. Describe code with words and abstractions, inputs and outputs, functionality and effects.
This is also why (besides the context window) I go back and refine the original prompt instead of answering its questions in the chat.
The longer you go on, the more likely ANY of these models is to ignore this rule.
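The clause I attach to planning prompts is roughly:

```
Keep this plan implementation-free: describe every component with words and
abstractions (inputs, outputs, functionality, and effects), and keep all code
out until I ask for it.
```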
Now, here is where I usually take the plan to Cursor.
I have a filestructure.md or project-structure.md that I defined with the models earlier, and then I edit it.
Sometimes I create the structure myself, but most of the time I use Cursor’s Auto mode.
It never fails, and for now that’s still free and unlimited (right? I’m actually not sure after reading some posts last week, but I haven’t hit any limits yet).
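For illustration, a filestructure.md for a hypothetical small project could look like this (the names are made up):

```markdown
# Project Structure
- src/
  - api/routes.py      # HTTP endpoints, thin wrappers only
  - core/billing.py    # business logic, no I/O
  - db/models.py       # persistence layer
- tests/
- docs/                # library docs as markdown, split into chapters
- todo.md
- milestones.md
- project-journal.md
```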
Milestones
Now after I have the general plan (from the chats before) and structure in place, I’ll flesh out a Milestone plan.
Here, I use Gemini 2.5 Pro.
Usually I ask it to create 4-5 milestones, then repeat the question-asking game.
Then, I go to each milestone and ask Gemini (via Cursor or the CLI) to create a detailed to-do list.
Todos
“Break everything down into testable steps, propose a detailed plan in the todo.md for each step in milestone X. ONLY IMPLEMENT CODE AFTER YOU’VE RECEIVED CONFIRMATION FROM ME.”
I have a local prompt kit, and this one changes from time to time. The all-caps part at the end is sometimes necessary and sometimes not.
I keep it just in case. I really hate when a model just starts coding, and that line reliably keeps any model from doing it.
Then I’ll edit the todo.md manually; the hardest part here is keeping it from being too ambitious.
Usually I’ll use Ctrl+K inline edits to break each task it created into 2-3 smaller tasks.
Or, I just write it myself (since prompting and writing here is pretty much equal in time spent).
It’s always one piece of functionality with at most one side effect per task.
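A slice of a todo.md after my manual editing might look like this (same hypothetical project as above):

```markdown
## Milestone 2: Billing
- [ ] 2.1 Create skeleton functions in core/billing.py (inputs/outputs only, no logic)
- [ ] 2.2 Implement calculate_invoice(); side effect: none
- [ ] 2.3 Implement save_invoice(); side effect: one DB write
- [ ] 2.4 Write tests for calculate_invoice() edge cases
```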
The first task is always to create placeholder functions that define input and output clearly.
This task usually creates a few files, but only skeleton functions with comments and references to other files and their contents.
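To make that concrete, here is the kind of skeleton I mean, sketched in Python for the hypothetical billing module (names are illustrative, not from a real project):

```python
# core/billing.py
# Skeleton only: inputs and outputs are pinned down here; implementation comes in later tasks.
from dataclasses import dataclass


@dataclass
class Invoice:
    customer_id: str
    total_cents: int


def calculate_invoice(customer_id: str, usage_units: int) -> Invoice:
    """Compute the invoice for one customer.

    Input: a customer id and metered usage. Output: an Invoice.
    Pricing rules go here; the customer record lives in db/models.py.
    """
    raise NotImplementedError


def save_invoice(invoice: Invoice) -> None:
    """Persist an invoice. Side effect: exactly one write via db/models.py."""
    raise NotImplementedError
```

With skeletons like this, the model (and tab completion) has everything it needs: names, types, and comments that say what goes where.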
If you have experience with GitHub Copilot, this might feel familiar.
When Copilot first came out, I couldn’t get it to work until I met someone who showed me how.
This is the same workflow.
Now, here is where I sometimes switch from Auto to Sonnet 4.
Most of the time I still stay inline but if it’s not too complex, I use Sonnet 4 to fill it all out at once.
Review each line of code and provide feedback
I keep a project-journal.md in every project.
After every edit that Cursor makes, I go over it. If I see patterns I do not like, I ask it to add them to the project-journal.md.
I’m constantly experimenting with how I use the project-journal.md, but one rule stays: always read it. Usually, I stick to bullet points; occasionally, after several milestones, I ask Cursor to tidy it up. It helps minimally, but it’s not worth overthinking.
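For reference, the entries are terse bullets, something like:

```markdown
- Prefer early returns over nested ifs.
- Keep all UI strings in one file.
- Touch db/models.py only in tasks that explicitly name it.
```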
End Chat after every task.
When one todo is done, I find it tempting to just move on, especially if it’s, again, just a small change.
No. I end the chat, and I do my end of chat protocol:
When I finish a task, or want to start a new chat within a task, I ask Cursor to update project-journal.md, milestones.md, or todo.md with what it did.
I am not consistent with what file I use here because I haven’t found what works best.
It’s bullet points, but on larger projects it’s probably better to move them to a milestones.md or todo.md that isn’t always read, to save context and avoid confusing the model (see prompting tips below).
You can, however, sometimes tell it to add a handoff line to the todo for the next item: consider what the next task is and what’s relevant from what we just did, then briefly explain it there.
I.e., for task n:
“In task n-1, I did x in file y to accomplish z; you can now use that to do task n.”
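Concretely, such a handoff line in the todo might read (hypothetical again):

```markdown
- [ ] 2.3 Implement save_invoice(). Note: in 2.2, calculate_invoice() in
  core/billing.py was changed to return an Invoice dataclass; persist that.
```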
Tab Completion
Cursor’s tab completion is absolutely bonkers.
Note the skeletons I created in my plans and Todos.
It’s not rare that I can just tab through it and have it fully implemented.
Now, most of you will think, wow, free. Which is a significant benefit.
But the biggest?
It’s fast.
Now here is a small caveat: I’m sure Cursor slows down your requests, either on purpose or because it routes everything through a proxy and manages load there.
And I believe they likely give priority to higher-tier users, and I’m only on pro.
Of all things, since it started, this is still my biggest gripe with Cursor. Sometimes it’s unbearably slow.
Prompting Tips
I will not write a prompt guide here (I did that a few years ago, and it’s horribly out of date now, as this will likely become too). But I’ll share a few general techniques that work for me and that I think will stay useful for a while because of how LLMs work.
Positive Prompts only.
Instead of “Do not implement code until I approve your plan”
Say: “First propose a plan and ONLY implement code after I have approved your plan.”
Be careful about your sentence structure. If possible, phrase it in the same order you want the model to execute.
I.e. “Propose a plan and ONLY implement code after I have approved your plan.” Instead of:
”Implement code only after I have approved your plan”.
Instead of telling it what not to do, tell it what to do.
And why that order? Because the plan should come before the code, in the prompt just as in the execution.
It works for me; positive prompting makes sense to me.
This makes sense because your prompt is what the next token’s probability is conditioned on.
And the second token the model outputs is conditioned on your prompt + first token.
You want to shape these probabilities into a clear landscape with tall peaks and low flat valleys.
For existing or larger projects I use Claude Code or Gemini CLI to plan.
I don’t always /init, but I always make the macro plan and detailed tasks together with Cursor.
Sometimes I try to break a step down into independent tasks. If doable, you can work in parallel, but here you have to be very specific.
Most of the time, it’s not worth the headache.
Often, after a task in the todo is done, I ask Claude Code to critically review the implementation and propose a detailed plan to improve it, based on the docs and journal (this doesn’t have to be Cursor-exclusive). That can cover UI, efficiency, structure, or explanations.
Side note: never use Opus with Cursor
I have never tried it. I have usage-based pricing off, so I can’t actually access any thinking models other than Gemini 2.5, nor Opus. And I wouldn’t want to: having a chat with it costs nearly a month’s worth of subscriptions in API costs (with or without Cursor).
You’ll hit the limit within a few prompts.
Git Branch, Commit Often and Manually
I check out a branch; I commit often, and only merge after each milestone. I never let Cursor do it. I don’t even give it access, and I won’t: I’m unskilled enough with Git that I wouldn’t trust myself to reverse some changes it could make without a lot of work. I use GitKraken.
Spec Mode
I am also using Kiro now. It does some of what I do here, but imho less efficiently; the likely cause is that it gets ahead of itself with the plans, and the back-and-forth I do above, it does with itself.
I haven’t tried Spec Mode in Cursor yet, but I will. Imho it’ll likely be a waste of tokens for now; you don’t need to do that within Cursor, nor can you do it better there yet than with the CLIs or just the chat interface.
Choose the right tool for the job. Or the best tool at your disposal.
Some extra tips
Images
Did you know you can upload images? That sometimes does help a lot with layout and styling, or as a guide for UI/UX.
Libraries
Ask Cursor to use libraries.
Research that yourself, find out what’s best, trust me it’s worth YOUR effort.
Then, if they have docs, copy them over as markdown files (create them yourself if they don’t provide an llms.txt) and split them into chapters you can reference.
Remember how we used to code? We’d go check the documentation, usually a couple of times until we remembered it, and then again when it updated.
AI isn’t magic. It has a training cutoff, so it is likely not up to date. It may not find the right docs, and even if it does, it just wastes your tokens and time. You can give it the docs it needs without it having to search the whole internet.
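In practice that can be as simple as a docs folder like this (the layout is my own habit, not a standard):

```
docs/
  some-library/01-getting-started.md
  some-library/02-configuration.md
  other-library/01-core-concepts.md
```

A task can then say “read docs/some-library/02-configuration.md” instead of letting the model search the web.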
Scope Guardrails
Stay within scope. Only edit file x, function x; if you want to go beyond that to add new functions or files, propose a plan and state your reasoning.
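Spelled out as a reusable prompt (my phrasing, adjust freely):

```
Stay within scope: only edit core/billing.py, function calculate_invoice().
If you believe new functions or files are needed, first propose a plan, state
your reasoning, and wait for my approval.
```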
Tests
Ask it to write tests and let you review the outcome. Don’t tell Cursor to make the tests pass; tell it to write tests that can pinpoint what goes wrong, and then to report what goes wrong where. Possibly create a new task and move to a new chat: make a todo list just for fixing that bug and use an entirely new chat. When you’ve fixed it, move back to Chat 1.
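The test prompt I reach for is along these lines:

```
Write tests for calculate_invoice() that pinpoint where its behavior diverges
from todo.md item 2.2. Leave the implementation unchanged. Run the tests and
report which ones fail and what that tells us about where the bug is.
```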
Diff View
Use it when reviewing or when something unexpected happens. This feature is so smooth.