Fast scraping is considered bad practice. Continuously pounding a website for pages consumes the server's CPU and bandwidth, and a robust site will detect the pattern and block your IP. And if you are unlucky, you might get a nasty letter for violating its terms of service!
How you delay requests in your crawler depends on how the crawler is implemented. If you are using Scrapy, you can set DOWNLOAD_DELAY, a setting that tells the crawler how long to wait between requests to the same domain. In a simple crawler that just processes a list of URLs sequentially, you can insert a call to time.sleep() between requests. Both approaches are sketched below.
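For the Scrapy case, a minimal sketch of the relevant configuration (DOWNLOAD_DELAY and RANDOMIZE_DOWNLOAD_DELAY are documented Scrapy settings; the two-second value is just an illustration):

    # settings.py in a Scrapy project
    DOWNLOAD_DELAY = 2  # seconds to wait between requests to the same domain
    RANDOMIZE_DOWNLOAD_DELAY = True  # jitter the delay (0.5x-1.5x) so requests look less mechanical

For the sequential case, a minimal sketch assuming the requests library and a placeholder URL list:

    import time

    import requests  # assumed HTTP client; any fetch function works here

    urls = [  # placeholder URLs for illustration
        "https://example.com/page1",
        "https://example.com/page2",
    ]

    delay_seconds = 2  # polite pause between requests

    for url in urls:
        response = requests.get(url)
        print(url, response.status_code)
        time.sleep(delay_seconds)  # throttle before fetching the next page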
Things get more complicated if you have implemented a distributed cluster of crawlers to spread the load of page requests, for example using a message queue with competing consumers. That problem has a number of possible solutions, which are beyond the scope of this discussion.