The answer is, "It depends."
Geek mode follows, so try to stay awake.
On most computer systems, a storage drive is split into two chunks: one for a table of contents and one for your actual data. When you ask for a file, the computer looks it up in the table of contents, then goes and retrieves the data the entry points to. If you ever formatted an 800K disk and wondered why you only got 700+K of space, the missing chunk is what the computer set aside for that table of contents. The same thing happens with hard drives.
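If it helps to see that split in code, here's a toy version in Python. The `disk`, `toc`, and `free_blocks` names are made up for illustration (real filesystems are vastly more elaborate), but the lookup-then-fetch flow is the same idea:

    BLOCK_SIZE = 512                                   # bytes per block
    disk = [b"\x00" * BLOCK_SIZE for _ in range(16)]   # the data region
    toc = {}                                           # table of contents: name -> (size, block list)
    free_blocks = set(range(len(disk)))

    def write_file(name, data):
        """Carve data into blocks and record the mapping in the TOC."""
        blocks = []
        for i in range(0, len(data), BLOCK_SIZE):
            idx = free_blocks.pop()                    # grab any free block
            disk[idx] = data[i:i + BLOCK_SIZE].ljust(BLOCK_SIZE, b"\x00")
            blocks.append(idx)
        toc[name] = (len(data), blocks)

    def read_file(name):
        """Look the file up in the TOC, then fetch its blocks from the data region."""
        size, blocks = toc[name]
        return b"".join(disk[i] for i in blocks)[:size]

    write_file("letter.txt", b"Dear reader, hello!")
    print(read_file("letter.txt"))                     # b'Dear reader, hello!'

Notice that the TOC bookkeeping lives alongside the data, which is exactly the space your 800K disk "lost" when you formatted it.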
When you ask a computer to delete a file, most operating systems just blow away the table of contents entry and mark all the chunks of the drive that held the data as available for rewriting. No major operating system bothers to clear the data blocks themselves (historically, a mix of programmer laziness and the fact that clearing blocks takes time, back when computer time was relatively expensive). The next time the operating system writes data to the disk, it asks the TOC for free space, and that new write may land right on top of your old, deleted file.
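Here's the lazy delete in the same toy terms. `delete_file` is invented for illustration, not any real OS call; the interesting part is what it doesn't do:

    BLOCK_SIZE = 512
    disk = [b"my secret diary".ljust(BLOCK_SIZE, b"\x00")]   # one block holding a file
    toc = {"diary.txt": [0]}                                 # name -> block indices
    free_blocks = set()

    def delete_file(name):
        """Forget the TOC entry and free the blocks -- but never clear them."""
        free_blocks.update(toc.pop(name))
        # Note what's missing: no disk[i] = b"\x00" * BLOCK_SIZE anywhere.

    delete_file("diary.txt")
    print("diary.txt" in toc)   # False -- the OS says the file is gone
    print(disk[0][:15])         # b'my secret diary' -- but the bytes survive

The file is only truly gone once some later write happens to claim block 0.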
This fact is what most disk recovery tools take advantage of. Instead of going through the table of contents, they slog through the data region of the drive and try to find things that look like old files but aren't in the table of contents. It's also why file recovery is best run as soon as possible: every new write risks overwriting a chunk of your file.
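In sketch form, a recovery tool's scan looks something like the following. Spotting files by their "magic number" headers is genuinely how signature-based carving tools (PhotoRec, for example) work, though this toy is nowhere near as thorough:

    # A few real file signatures (magic numbers):
    MAGICS = {
        b"\x89PNG": "PNG image",
        b"%PDF": "PDF document",
        b"PK\x03\x04": "ZIP archive",
    }

    def carve(disk_blocks):
        """Ignore the TOC entirely; walk the raw blocks hunting for file headers."""
        found = []
        for i, block in enumerate(disk_blocks):
            for magic, kind in MAGICS.items():
                if block.startswith(magic):
                    found.append((i, kind))
        return found

    # One "deleted" PDF lurking among otherwise blank blocks:
    disk = [b"\x00" * 512, b"%PDF-1.4 ...".ljust(512, b"\x00"), b"\x00" * 512]
    print(carve(disk))   # [(1, 'PDF document')]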
This becomes a technical issue over time because of something called "fragmentation." Let's say you delete a 10K file, leaving a 10K space on your drive marked as ready to be overwritten. If you're lucky, the next file to be written will also be 10K, and it will fill the space perfectly. If you're not, the next file will be smaller or larger than 10K. If it's smaller (say, 7K), you'll have 3K of leftover space that's harder to fit a new file into. That leftover space becomes a "fragment," and over time your drive can fill up with lots of these tiny spaces that can't be used for much. This makes the table of contents bigger and harder to plow through, which can slow down the computer.
If the file is bigger than 10K (let's say 22K), the computer splits it into chunks (a 10K chunk and a 12K chunk) and saves them in two different sections of the hard drive. This also makes the table of contents more complex and harder to plow through, which again can slow down the computer.
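You can watch the fragments pile up with a toy "first-fit" allocator. The hole list and `allocate` function here are invented for illustration, and first-fit is only one of several placement strategies real filesystems use:

    def allocate(holes, size):
        """First-fit: fill the earliest holes, splitting the file across them
        if needed. Holes are (start, length) pairs, measured in KB."""
        placed = []
        for i, (start, length) in enumerate(holes):
            take = min(size, length)
            placed.append((start, take))               # one extent of the file
            holes[i] = (start + take, length - take)   # shrink (maybe empty) the hole
            size -= take
            if size == 0:
                break
        holes[:] = [h for h in holes if h[1] > 0]      # drop exhausted holes
        return placed

    holes = [(0, 10)]                   # the 10K gap left by the deleted file
    print(allocate(holes, 7), holes)    # [(0, 7)] [(7, 3)] -- a 3K fragment remains
    holes = [(0, 10), (50, 20)]
    print(allocate(holes, 22))          # [(0, 10), (50, 12)] -- one file, two extents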
Hard drive defragmenters basically take your data and rewrite it to the drive to get rid of fragments and simplify your table of contents. You can do the same thing by copying all your data to a new hard drive, wiping the old one, and copying your files back, but that's more work.
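In toy form, a defragmenter is just a copy-and-compact pass over that disk/TOC model (again, made-up names, not any real API):

    def defragment(disk, toc):
        """Rewrite every file's blocks back-to-back at the start of the disk."""
        new_disk, new_toc = [], {}
        for name, blocks in toc.items():
            start = len(new_disk)
            new_disk.extend(disk[i] for i in blocks)            # copy data contiguously
            new_toc[name] = list(range(start, len(new_disk)))   # one simple run per file
        new_disk += [b""] * (len(disk) - len(new_disk))         # the rest is free space
        return new_disk, new_toc

    disk = [b"A1", b"", b"B1", b"A2", b"", b"B2"]   # two files, scattered about
    toc = {"a": [0, 3], "b": [2, 5]}
    disk, toc = defragment(disk, toc)
    print(toc)   # {'a': [0, 1], 'b': [2, 3]} -- contiguous files, simpler TOC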
This also becomes a big privacy issue. I believe some researchers at MIT ran an experiment where they bought used hard drives on eBay over the span of six months. An alarming number of them (60% or more) had personal data on them in recoverable form: credit card numbers, Social Security numbers, addresses, names, passwords, and all the other good stuff that makes identity theft easier. Some of the drives hadn't even been reformatted -- they could just be plugged into a PC and read. There are secure-erase utilities that deliberately write and rewrite random data across a hard drive an arbitrary number of times to wipe your personal data, and there's at least one free one for every major platform out there.
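If you're curious what those utilities do under the hood, here's a rough per-file sketch of the overwrite idea (whole-drive wipers do the same thing across every block). Big caveat: this is an illustration, not a security tool -- on journaling filesystems and SSDs with wear leveling, overwriting in place isn't reliable, and real tools such as GNU shred document the same limitation:

    import os

    def secure_delete(path, passes=3):
        """Overwrite a file's bytes with random data several times, then unlink it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # replace the contents with noise
                f.flush()
                os.fsync(f.fileno())        # push the bytes past the OS caches
        os.remove(path)

    with open("secrets.txt", "wb") as f:
        f.write(b"credit card: 0000 0000 0000 0000")
    secure_delete("secrets.txt")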
Whew! Hope I didn't lose you there!