At work, we have a server several years old that struggles to keep up with the 40GB database behind our rental software. I've researched this issue before, and it seems I came across a few things that actually work for people...
1. Add more database files: I kept running into a rule of thumb suggesting 1 file per database per logical CPU; this is supposed to spread out read/write activity. Files can be added without affecting any data in the database, and running an optimization/maintenance plan afterward seems to start redistributing data across the new files.
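A minimal sketch of adding a file, assuming a database named RentalDB and a path on a second drive (both names are hypothetical, not from our actual setup):

```sql
-- Add a secondary data file; repeat with new names for each additional file
-- until the count matches the number of logical CPUs.
ALTER DATABASE RentalDB
ADD FILE (
    NAME = RentalDB_Data2,            -- logical name, must be unique per database
    FILENAME = 'D:\Data\RentalDB_Data2.ndf',
    SIZE = 1GB,
    FILEGROWTH = 256MB
);
```

Sizing all files in a filegroup equally matters here: SQL Server's proportional-fill algorithm spreads writes based on free space in each file, so unevenly sized files won't balance activity.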
2. tempdb can go on a RAMDrive: it's recreated as needed, and it's used as a scratch area by the other databases on the server. I found some RAMDisk software a while back; I went back to check it out again and set up a compressed NTFS 512MB drive (the SQL Server instance never seems to use more than 1GB, despite being allowed up to 3GB at times). I repeated #1 by splitting tempdb into 4 files of 120MB each. I can't seem to get the system to stop using the original file at the moment; I'm sure there's a way.
Article on tempdb optimization
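A sketch of what the tempdb move looks like, assuming the RAM drive is mounted as R:\ (the drive letter and file names are examples). Notably, tempdb file relocations only take effect after the instance restarts, which may explain why the original file is still in use — you can't drop tempdb's primary file, but you can point it at the new location:

```sql
-- Relocate tempdb's primary data file to the RAM drive
-- (applies on the next service restart, not immediately).
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'R:\tempdb.mdf', SIZE = 120MB);

-- Add the extra files to match the split described above.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'R:\tempdb2.ndf', SIZE = 120MB);
-- ...repeat for tempdev3 and tempdev4, then restart the SQL Server service.
```

One caution worth flagging: if the RAM drive isn't available at service startup, SQL Server won't start, so the RAMDisk software needs to initialize before the SQL Server service does.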
3. The databases can have their indexes defragmented. We've done this in the past, but it gets harder as database file sizes grow: it took 17 hours the last time I tried it.
MSDN article on defragging SQL indexes
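A sketch of a more targeted approach that might cut that 17 hours down: check fragmentation first and only touch the indexes that need it (the table and index names below are hypothetical):

```sql
-- List indexes in the current database with meaningful fragmentation.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10;

-- Light fragmentation (roughly 10-30%): reorganize, which is online
-- and can be stopped without losing progress.
ALTER INDEX IX_Rentals_CustomerId ON dbo.Rentals REORGANIZE;

-- Heavy fragmentation (above ~30%): rebuild instead.
ALTER INDEX IX_Rentals_CustomerId ON dbo.Rentals REBUILD;
```

Reorganizing only the worst offenders, rather than rebuilding everything in one marathon pass, spreads the work across smaller maintenance windows.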