What level of test coverage to shoot for



This lesson is from The Complete Guide to Rails Testing.

"What level of test coverage should I shoot for?" is one of the questions most commonly asked by beginners to Rails testing.
My answer is that you shouldn't shoot for a particular level of test coverage. I recommend that instead you make testing a habitual part of your development workflow. A healthy level of test coverage will flow from there.
I also want to address why people ask this question. I think it's because they want some way of knowing whether they're testing "enough" or doing testing "right". Test coverage is one way of measuring this, but I think there are better, more meaningful ways.
I think that if you're feeling the kinds of pains that come from missing tests, then you need more tests. If you're not feeling those kinds of pains, then you're good.
Pains that tell you your test coverage might be insufficient
Too many bugs
This is the obvious one. All software has bugs, but if you feel like the rate of new bugs appearing in production is unacceptably high, it may be a symptom of too little test coverage.
Too much manual testing
This is another fairly obvious one. The only alternative to using automated tests, aside from not testing at all, is to test manually.
Some level of manual testing is completely appropriate. Automated tests can never replace, for example, exploratory testing done by a human. But humans should only carry out the testing that can't be done better by a computer. Otherwise testing is much more expensive and time-consuming than it needs to be.
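To make that concrete, here's a rough sketch of what automating one of those routine manual checks might look like, using an RSpec/Capybara system spec. The model, route helper, and flash message are all hypothetical placeholders (a Devise-style sign-in flow is assumed), not something from a particular codebase.

```ruby
# spec/system/sign_in_spec.rb
# Hypothetical example: a routine "can users still sign in?" check that
# would otherwise be repeated by hand before every deploy.
require "rails_helper"

RSpec.describe "Signing in", type: :system do
  it "lets an existing user sign in" do
    user = User.create!(email: "user@example.com", password: "password123")

    visit new_user_session_path # assumes a Devise-style route
    fill_in "Email", with: user.email
    fill_in "Password", with: "password123"
    click_button "Sign in"

    expect(page).to have_content("Signed in successfully")
  end
end
```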
Infrequent deployments
Infrequent deployments can arise as a symptom of too few tests for a couple different reasons.
One possible reason is that the need for manual testing bottlenecks the deployment timing. If it takes two days for manual testers to do a full regression test on the application, you can of course only deploy a fully-tested version of your application once every two days at maximum. (And that's assuming the regression test passes every time, which is not typically the case.)
Another possible reason for infrequent deployments is the following logic: things go wrong every time we deploy, therefore things will go wrong less often if we deploy less often, so let's deploy less often. Unfortunately this decision means that problems pile up and get introduced to production all at once on each deployment instead of getting sprinkled lightly over time.
With the presence of a good test suite, deployments can happen many times a day instead of just once every few weeks or months.
Inability to refactor or make big changes
When a particular change has a small footprint, manual testing is usually good enough (although of course sometimes changes that seem like they'd have small footprints cause surprising regressions in distant areas).
When a change has a large footprint, like a Rails version upgrade or a broad refactoring, it's basically impossible to gain sufficient confidence in the safety of the change without having a solid automated test suite. So on codebases without good test coverage, these types of improvements tend not to happen.
Poor code quality
As I've written elsewhere, it's not possible to have clean, understandable code without having tests.
The reason is that refactoring is required in order to have good code, and automated tests are required in order to do sufficient refactoring.
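As an illustration, a test like the hypothetical one below pins the current behavior in place, which is what makes it safe to rework the code underneath it. The model and method names are made up for the sake of the example.

```ruby
# spec/models/user_spec.rb
# Hypothetical example: with this behavior pinned down, the implementation
# of #full_name can be refactored freely without fear of silently changing it.
require "rails_helper"

RSpec.describe User, type: :model do
  describe "#full_name" do
    it "joins the first and last names" do
      user = User.new(first_name: "Ada", last_name: "Lovelace")
      expect(user.full_name).to eq("Ada Lovelace")
    end
  end
end
```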
Diminished ability to hire and retain talent
Lastly, it can be hard to attract and retain high-quality developers if you lack tests and you're suffering from the ailments that result from having poor test coverage.
If a job candidate asks detailed questions about your development practices or the state of your codebase, he or she might develop a negative perception of your organization relative to the other organizations where he or she is interviewing. All other things being equal, a sophisticated and experienced engineer is probably more likely to pick some other organization that does write tests over yours, which doesn't.
Even if you manage to get good people on your team, you might have trouble keeping them. It's painful to live with all the consequences of not having tests. Your smartest people are likely to be the most sensitive to these pains, and they may well seek somewhere else to work where the development experience is more pleasant.
The takeaway
I don't think test coverage is a particularly meaningful way to tell whether you're testing enough. Instead, assess the degree to which you're suffering from the above symptoms of not having enough tests. Your degree of suffering is probably proportionate to your need for more tests.
If you do this, then "good" coverage numbers are likely to follow. The last time I checked, the test coverage level on my main codebase at work was 96.47%.
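For what it's worth, if you do want to see that number for your own app, SimpleCov is the usual tool in the Ruby world. A minimal sketch, assuming RSpec; the file paths shown are just the conventional ones:

```ruby
# Gemfile
group :test do
  gem "simplecov", require: false
end

# spec/spec_helper.rb
# SimpleCov has to start before any application code is loaded,
# so this goes at the very top of the file.
require "simplecov"
SimpleCov.start "rails"
```

Run your suite as usual (e.g. `bundle exec rspec`) and SimpleCov writes an HTML report to `coverage/index.html`.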