The Data Wrangling Workshop
Create your own actionable insights using data from multiple raw sources

Product type: Paperback
Published: Jul 2020
Publisher: Packt
ISBN-13: 9781839215001
Length: 576 pages
Edition: 2nd Edition
Authors (3): Dr. Tirthajyoti Sarkar, Shubhadeep Roychowdhury, Brian Lipp
Table of Contents (11)

Preface
1. Introduction to Data Wrangling with Python
2. Advanced Operations on Built-In Data Structures
3. Introduction to NumPy, Pandas, and Matplotlib
4. A Deep Dive into Data Wrangling with Python
5. Getting Comfortable with Different Kinds of Data Sources
6. Learning the Hidden Secrets of Data Wrangling
7. Advanced Web Scraping and Data Gathering
8. RDBMS and SQL
9. Applications in Business Use Cases and Conclusion of the Course
Appendix

Introduction

As data science and analytics have become central to modern life, the role of the data scientist has grown ever more important. Sourcing data is an essential part of data science; however, it is the science applied to that data that makes you, the practitioner, truly valuable.

To practice high-quality science with data, you need to make sure it is properly sourced, cleaned, formatted, and pre-processed. This book will teach you the essentials of this invaluable component of the data science pipeline: data wrangling. In short, data wrangling is the process of ensuring that data is clean, accurate, consistently formatted, and ready to be used for analysis.
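To make the clean-and-format step concrete, here is a minimal sketch using only the Python standard library. The records and field names are invented for illustration; real wrangling pipelines handle far messier input, but the pattern of stripping stray whitespace, coercing types, and discarding unparseable rows is the same.

```python
# Hypothetical raw records: stray whitespace and an unparseable reading.
raw_records = [
    {"city": " San Diego ", "temp_f": "72"},
    {"city": "Fresno", "temp_f": "n/a"},
    {"city": "Sacramento", "temp_f": "81"},
]

def clean(records):
    """Strip whitespace, coerce numeric fields, and drop bad rows."""
    cleaned = []
    for rec in records:
        city = rec["city"].strip()
        try:
            temp = float(rec["temp_f"])
        except ValueError:
            continue  # discard records we cannot parse
        cleaned.append({"city": city, "temp_f": temp})
    return cleaned

print(clean(raw_records))
# → [{'city': 'San Diego', 'temp_f': 72.0}, {'city': 'Sacramento', 'temp_f': 81.0}]
```

Dropping bad rows is only one policy; later chapters cover alternatives such as imputing missing values instead of discarding them.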

A prominent example of data wrangling at scale is the analysis conducted every year at the San Diego Supercomputer Center at the University of California San Diego (UCSD). Wildfires are very common in California and are caused mainly by dry weather and extreme heat, especially during the summers. Data scientists at the center gather data each year to predict the nature and spread direction of wildfires in California. The data comes from diverse sources, such as weather stations, sensors in the forest, fire stations, satellite imagery, and Twitter feeds. However, this data may arrive incomplete or contain missing values.

After collecting data from various sources, it must be cleaned and formatted, for example by scaling numbers and removing unwanted characters from strings; otherwise, the analysis built on it will be flawed. We may need to reformat data from JavaScript Object Notation (JSON) into Comma-Separated Values (CSV), or normalize the numbers, that is, center and scale them relative to themselves. Such processing is often required before feeding data into certain machine learning models.
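The two transformations just mentioned can be sketched together in a few lines of standard-library Python. The JSON payload below is made up; the normalization shown is the center-and-scale (z-score) form described in the text.

```python
import csv
import io
import json
import statistics

# Hypothetical JSON payload, e.g. readings from three sensor stations.
json_payload = (
    '[{"station": "A", "reading": 10.0},'
    ' {"station": "B", "reading": 20.0},'
    ' {"station": "C", "reading": 30.0}]'
)
rows = json.loads(json_payload)

# Normalize: subtract the mean, then divide by the standard deviation.
readings = [r["reading"] for r in rows]
mean = statistics.mean(readings)
stdev = statistics.stdev(readings)
for r in rows:
    r["reading_norm"] = (r["reading"] - mean) / stdev

# Reformat from JSON to CSV.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["station", "reading", "reading_norm"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Here the normalized readings come out as -1.0, 0.0, and 1.0, so a model that is sensitive to feature scale sees values centered on zero rather than raw magnitudes.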

This is an example of how data wrangling and data science can prove to be helpful and relevant. This chapter will discuss the fundamentals of data wrangling. Let's get started.
