Go Web Scraping Quick Start Guide


Why is Go a good fit for web scraping?

The architecture of the Go programming language, as well as its standard library, makes it a great choice for building web scrapers that are fast, scalable, and maintainable. Go is a statically typed, garbage-collected language with a syntax close to that of C/C++. Its syntax will feel very familiar to developers coming from object-oriented programming languages, and Go offers a few functional programming elements as well, such as higher-order functions. With all that said, there are three main reasons why Go is a great fit for web scraping:

Go is fast

Speed is one of the primary objectives of the Go programming language. Many benchmarks put the speed of Go on par with that of C++, Java, and Rust, and miles ahead of languages such as Python and Ruby. Benchmark results should always be viewed with a bit of skepticism, but Go consistently stands out as a language with extremely high performance. This speed is typically coupled with a low resource footprint, as the runtime is very lightweight and does not use much RAM. One of the hidden benefits of this is that you can run Go programs on smaller machines, or run multiple instances on the same machine, without significant overhead. This reduces the cost of operating a web scraper at larger scales.

This speed is inherently important in building web scrapers, and it becomes more noticeable at larger scales. Take, for example, a web scraper that requires two minutes to scrape a page: you could theoretically process 720 pages in a day. If you were able to reduce that time to one minute per page, you would double the number of pages per day to 1,440! Better yet, this would be done at the same cost. The speed and efficiency of Go allow you to do more with less.
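The standard library also makes this efficiency easy to exploit. As a minimal sketch (the URLs here are placeholders, and this is only one way to arrange it), Go's lightweight goroutines let a scraper fetch several pages at once without much overhead:

package main

import (
	"fmt"
	"net/http"
	"sync"
)

// fetchAll requests every URL concurrently. Each goroutine needs only a few
// kilobytes of memory, so scraping many pages in parallel stays cheap.
func fetchAll(urls []string) {
	var wg sync.WaitGroup
	for _, url := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				return
			}
			resp.Body.Close()
			fmt.Println(u, resp.Status)
		}(url)
	}
	wg.Wait() // block until every fetch has finished
}

func main() {
	fetchAll([]string{"http://example.com", "http://example.org"})
}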

Go is safe

One of the contributing factors to Go's speed is the fact that it is statically typed. This makes the language ideal for building systems at a large scale and for being confident in how your program will run in production. Also, because Go programs are compiled rather than interpreted, you can catch more bugs at compile time and greatly reduce the dreaded runtime errors.
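For example, here is a contrived sketch (parsePage is a made-up function, not from any library) of the kind of mistake the compiler rejects before your scraper ever runs:

package main

import "fmt"

// parsePage expects a page count as an int; passing any other type is a
// compile-time error rather than a surprise in production.
func parsePage(pages int) {
	fmt.Println("processing", pages, "pages")
}

func main() {
	parsePage(10)      // compiles and runs
	// parsePage("10") // rejected at compile time:
	//                 // cannot use "10" (untyped string constant) as int value
}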

This safety net also extends to the Go garbage collector. Garbage collection means that you do not need to manually allocate and deallocate memory, which helps prevent memory leaks that might occur from mishandling objects in your code. Some may argue that garbage collection impedes the performance of an application; however, the Go garbage collector adds very little overhead in terms of interfering with your code execution. Many sources report that the pauses caused by Go's garbage collector are less than one millisecond. In most cases, this is a very small price to pay to avoid chasing down memory leaks in the future, and it certainly holds true for web scrapers.

As web scrapers grow in both size and complexity, it can be difficult to track all of the errors that may occur during processing. At the scale of thousands of web pages processed per day, one small bug can significantly affect the collection of data. At the end of the day, data missed is money lost, so preventing as many known errors as possible before the system is running is critical.

Go is simple

Beyond the architecture of the Go programming language itself, the standard library offers all the right packages to make web scraping easy. Go provides a built-in HTTP client in the net/http package that is fully featured out of the box, yet also allows for a lot of customization. Making an HTTP request is as simple as follows:

http.Get("http://example.com")

The net/http package also contains utilities to structure HTTP requests and responses, as well as all of the HTTP status codes, which we will dive into later in this book. You will rarely need any third-party packages to handle communication with web servers. The Go standard library also has tools to help analyze HTTP requests, quickly consume HTTP response bodies, and debug the requests and responses in your web scraper. The HTTP client in the net/http package is also very configurable, letting you tune special parameters and methods to suit your specific needs. You will typically not need to do this, but the option exists if you encounter such a situation.
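To make that concrete, a slightly fuller version of the one-line request above might look like the following sketch, which adds the error handling, status check, and body consumption a real scraper needs (io.ReadAll requires Go 1.16 or later; older versions use ioutil.ReadAll):

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Issue the request; the default client follows redirects and
	// manages connections for you.
	resp, err := http.Get("http://example.com")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Check the status code before trusting the body.
	if resp.StatusCode != http.StatusOK {
		log.Fatalf("unexpected status: %s", resp.Status)
	}

	// Consume the full response body.
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}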

This simplicity helps eliminate some of the guesswork of writing code. You will not need to determine the best way to make an HTTP request; Go has already worked it out and provided you with the tools you need to get the job done. Even when you need more than the standard library, the Go community has built tools that follow the same culture of simplicity, which makes integrating third-party libraries an easy task.
