I can answer that... since wrestlingdb is my site.
It's really no secret, and vacheroi is correct. I use PHP with Perl-compatible regular expressions (PCRE) to "match" the headlines and rip them out of the page. It's not an exact science, and it takes a bit of trial and error, but it works pretty well.
For example, for 1wrestling, after I've grabbed the page into a buffer, I run it through three lines of code.
The first line takes the page and strips out most of the header/footer-type stuff, leaving the body of headlines. The second line strips out all of the tags except for the links. The third line then matches certain parts of the links and puts them into an array, which I then insert into a database.
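A simplified sketch of those three steps follows. The comment markers, regex patterns, and sample HTML here are illustrative placeholders, not the actual site-specific ones, which change with every redesign:

```php
<?php
// Illustrative sketch only -- real patterns are site-specific.
// Stand-in for the page buffer after it's been grabbed:
$page = '<html><head>junk</head><body>nav stuff'
      . '<!-- headlines --><b><a href="/news/1.html">Big Title Change</a></b>'
      . '<br><i><a href="/news/2.html">PPV Results</a></i><!-- /headlines -->'
      . 'footer stuff</body></html>';

// 1. Strip the header/footer, keeping just the block of headlines
preg_match('/<!-- headlines -->(.*)<!-- \/headlines -->/s', $page, $m);
$body = $m[1];

// 2. Strip out all of the tags except for the links
$body = strip_tags($body, '<a>');

// 3. Match the URL and text of each link into an array for the DB insert
preg_match_all('/<a href="([^"]+)"[^>]*>([^<]+)<\/a>/i', $body,
               $headlines, PREG_SET_ORDER);

foreach ($headlines as $h) {
    echo $h[2] . ' => ' . $h[1] . "\n";  // headline text => URL
}
```

Note that `strip_tags()` with `'<a>'` as the allowed-tags argument keeps the anchor tags (attributes included) while dropping everything else, which is what makes the final `preg_match_all()` pattern simple enough to survive minor markup changes.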
Each site is a bit different, and when a site redesigns, I have to go through the whole trial-and-error process again to figure out what works.
Well, both, really. The OnlineOnslaught.com index page circa right now is about 32 KB (to the nearest allocation unit on my hard drive), meaning that the site incurs 1 MB of traffic for every 32 users who visit the index page alone.
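Spelled out, the arithmetic is just page size times visitors (taking 1 MB = 1024 KB; the page size is the rounded figure from above, not an exact byte count):

```php
<?php
// Back-of-the-envelope traffic math for the index page alone
$pageSizeKB = 32;                  // index page, roughly 32 KB
$usersPerMB = 1024 / $pageSizeKB;  // visitors served per 1 MB of traffic
echo $usersPerMB . " visitors per MB\n";  // prints "32 visitors per MB"
```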