Scalable Data Architecture with Java: Build efficient enterprise-grade data architecting solutions using Java
By Sinchan Banerjee
Paperback | Sep 2022 | 382 pages | 1st Edition

Scalable Data Architecture with Java

Basics of Modern Data Architecture

Since the start of the 21st century, growing internet usage and increasingly powerful data insight tools and technologies have caused a data explosion, and data has become the new gold. This has driven increased demand for useful and actionable data, as well as the need for quality data engineering solutions. However, architecting and building scalable, reliable, and secure data engineering solutions is often complicated and challenging.

A poorly architected solution often fails to meet the needs of the business: the data quality is poor, SLAs are missed, or the solution doesn't remain sustainable or scalable as data grows in production. To help data engineers and architects build better solutions, dozens of open source and proprietary tools are released every year. Yet even a well-designed solution sometimes fails because of a poor choice or implementation of tools.

This book discusses various architectural patterns, tools, and technologies with step-by-step hands-on explanations to help an architect choose the most suitable solution and technology stack to solve a data engineering problem. Specifically, it focuses on tips and tricks to make architectural decisions easier. It also covers other essential skills that a data architect requires such as data governance, data security, performance engineering, and effective architectural presentation to customers or upper management.

In this chapter, we will explore the landscape of data engineering and the basic features of data in modern business ecosystems. We will cover various categories of modern data engineering problems that a data architect tries to solve. Then, we will learn about the roles and responsibilities of a Java data architect. We will also discuss the challenges that a data architect faces while designing a data engineering solution. Finally, we will provide an overview of the techniques and tools that we’ll discuss in this book and how they will help an aspiring data architect do their job more efficiently and be more productive.

In this chapter, we’re going to cover the following main topics:

  • Exploring the landscape of data engineering
  • Responsibilities and challenges of a Java data architect
  • Techniques to mitigate those challenges

Exploring the landscape of data engineering

In this section, you will learn what data engineering is and why it is needed. You will also learn about the various categories of data engineering problems and some real-world scenarios where they are found. It is important to understand the varied nature of data engineering problems before you learn how to architect solutions for such real-world problems.

What is data engineering?

By definition, data engineering is the branch of software engineering that specializes in collecting, analyzing, transforming, and storing data in a usable and actionable form.

With the growth of social platforms, search engines, and online marketplaces, there has been an exponential increase in the rate of data generation. In 2020 alone, around 2,500 petabytes of data was generated by humans each day. It is estimated that this figure will go up to 468 exabytes per day by 2025. The high volume and availability of data have enabled rapid technological development in AI and data analytics. This has led businesses, corporations, and governments to gather insights like never before to give customers a better experience of their services.

However, raw data is seldom usable as-is. As a result, there is increased demand for creating usable data that is secure and reliable. Data engineering revolves around creating scalable solutions to collect raw data and then analyze, validate, transform, and store it in a usable and actionable format. In certain scenarios and organizations, businesses additionally expect the usable and actionable data to be published as a service.

Before we dive deeper, let’s explore a few practical use cases of data engineering:

  • Use case 1: American Express (Amex) is a leading credit card provider that needs to group customers with similar spending behavior together. This ensures that Amex can generate personalized offers and discounts for targeted customers. To do this, Amex needs to run a clustering algorithm on its data. However, that data is collected from various sources: some of it flows from a mobile app, some from different Salesforce organizations such as sales and marketing, and some from logs and JSON events. This data is known as raw data, and it can contain junk characters, missing fields, special characters, and sometimes unstructured data such as log files. Here, the data engineering team ingests the data from the different sources, cleans it, transforms it, and stores it in a usable structured format. This ensures that the clustering application can run on clean and sorted data.
  • Use case 2: A health insurance provider receives data from multiple sources. This data comes from various consumer-facing applications, third-party vendors, Google Analytics, other marketing platforms, and mainframe batch jobs. However, the company wants a single data repository to be created that can serve different teams as the source of clean and sorted data. Such a requirement can be implemented with the help of data engineering.

Now that we understand data engineering, let’s look at a few of its basic concepts. We will start by looking at the dimensions of data.

Dimensions of data

Any discussion on data engineering is incomplete without talking about the dimensions of data. The dimensions of data are some basic characteristics by which the nature of data can be analyzed. The starting point of data engineering is analyzing and understanding the data.

To successfully analyze and build a data-oriented solution, the four Vs of modern data analysis are very important. These can be seen in the following diagram:

Figure 1.1 – Dimensions of data

Let’s take a look at each of these Vs in detail:

  • Volume: This refers to the size of the data, which can range from a few bytes to a few hundred petabytes. Volume analysis usually involves understanding the size of the whole dataset as well as the size of a single record or event. Understanding the size is essential for choosing technologies and making infrastructure sizing decisions to process and store the data.
  • Velocity: This refers to the speed at which data is getting generated. High-velocity data requires distributed processing. Analyzing the speed of data generation is especially critical for scenarios where businesses require usable data to be made available in real-time or near-real-time.
  • Variety: This refers to the variation in the formats in which data sources can generate data. Usually, data falls into one of the following three types:
    • Structured: Structured data is where the number of columns, their data types, and their positions are fixed. All classical datasets that fit neatly in the relational data model are perfect examples of structured data.
    • Unstructured: These datasets don’t conform to a specific structure. Each record in such a dataset can have any number of columns in any arbitrary format. Examples include audio and video files.
    • Semi-structured: Semi-structured data has a structure, but the order of the columns and the presence of a column in each record is optional. A classical example of such a dataset is any hierarchical data source, such as a .json or a .xml file.
  • Veracity: This refers to the trustworthiness of the data. In simple terms, it is related to the quality of the data. Analyzing the noise of data is as important as analyzing any other aspect of the data. This is because this analysis helps create a robust processing rule that ultimately determines how successful a data engineering solution is. Many well-engineered and designed data engineering solutions fail in production due to a lack of understanding about the quality and noise of the source data.

Now that we have a fair idea of the characteristics by which the nature of data can be analyzed, let’s understand how they play a vital role in different types of data engineering problems.

Types of data engineering problems

Broadly speaking, the kinds of problems that data engineers solve can be classified into two basic types:

  • Processing problems
  • Publishing problems

Let’s take a look at these problems in more detail.

Processing problems

The problems that are related to collecting raw data or events, processing them, and storing them in a usable or actionable data format are broadly categorized as processing problems. Typical use cases can be a data ingestion problem such as Extract, Transform, Load (ETL) or a data analytics problem such as generating a year-on-year report.

Again, processing problems can be divided into three major categories, as follows:

  • Batch processing
  • Real-time processing
  • Near real-time processing

This can be seen in the following diagram:

Figure 1.2 – Categories of processing problems

Let’s take a look at each one of these categories in detail.

Batch processing

If the SLA of processing is more than 1 hour (for example, if the processing needs to be done once in 2 hours, once daily, once weekly, or once biweekly), then such a problem is called a batch processing problem. This is because, when a system processes data at a longer time interval, it usually processes a batch of data records and not a single record/event. Hence, such processing is called batch processing:

Figure 1.3 – Batch processing problem

Usually, a batch processing solution depends on the volume of data. If the data volume is more than tens of terabytes, it usually needs to be processed as big data. Also, since batch processes are schedule-driven, a workflow manager or scheduler is needed to run the jobs. We will discuss batch processing in more detail later in this book.
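To make the batch idea concrete, here is a minimal, hypothetical sketch in Java: a scheduled job that picks up an accumulated batch of raw records and cleans them in one pass. The record contents, cleaning rules, and interval are illustrative only; a production batch job would be triggered by a workflow manager such as Airflow or Oozie rather than an in-process scheduler:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.stream.Collectors;

class BatchJob {
    // Clean one accumulated batch of raw records in a single pass.
    static List<String> processBatch(List<String> rawRecords) {
        return rawRecords.stream()
                .map(String::trim)              // strip stray whitespace
                .filter(r -> !r.isEmpty())      // drop empty/junk records
                .map(String::toLowerCase)       // normalize casing
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run one batch now; a real deployment would schedule this every few
        // hours or days via a workflow manager rather than in-process.
        scheduler.schedule(
                () -> System.out.println(processBatch(List.of("  Order-1 ", "", "ORDER-2"))),
                0, TimeUnit.SECONDS);
        scheduler.shutdown();
        scheduler.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The key property is that the processing function receives a whole batch of records at once, rather than reacting to single events as they arrive.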

Real-time processing

A real-time processing problem is a use case where raw data/events are to be processed on the fly and the response or the processing outcome should be available within seconds, or at most within 2 to 5 minutes.

As shown in the following diagram, a real-time process receives data in the form of an event stream and immediately processes it. Then, it either sends the processed event to a sink or to another stream of events to be processed further. Since this kind of processing happens on a stream of events, this is known as real-time stream processing:

Figure 1.4 – Real-time stream processing

As shown in Figure 1.4, event E0 gets processed and sent out by the streaming application, while events E1, E2, and E3 wait in the queue to be processed. At t1, event E1 also gets processed, illustrating the continuous processing of events by the streaming application.

An event can be generated at any time (24/7), which creates a new kind of problem. If the producer application sends an event directly to a consumer, there is a chance of event loss unless the consumer application is running 24/7. Even bringing down the consumer application for maintenance or upgrades isn't possible, meaning the consumer application would need zero downtime; however, an application with zero downtime is not realistic. Such a model of communication between applications is called point-to-point communication.

Another challenge in point-to-point communication for real-time problems is the speed of processing, which should always be equal to or greater than the speed of the producer; otherwise, events will be lost or the consumer's memory may overrun. So, instead of sending events directly to the consumer application, they are sent asynchronously to an Event Bus or a Message Bus. An Event Bus is a high-availability container, such as a queue or a topic, that can hold events. This pattern of sending and receiving data asynchronously by introducing a high-availability Event Bus in between is called the Pub-Sub framework.

The following are some important terms related to real-time processing problems:

  • Event: An event is a data packet generated as a result of an action, a trigger, or an occurrence. Events are also popularly known as messages in the Pub-Sub framework.
  • Producer: A system or application that produces and sends events to a Message Bus is called a publisher or a producer.
  • Consumer: A system or application that consumes events from a Message Bus to process is called a consumer or a subscriber.
  • Queue: This has a single producer and a single consumer. Once a message/event is consumed by a consumer, that event is removed from the queue. As an analogy, it’s like an SMS or an email sent to you by one of your friends.
  • Topic: Unlike a queue, a topic can have multiple consumers and producers. It’s a broadcasting channel. As an analogy, it’s like a TV channel such as HBO, where multiple producers are hosting their show, and if you have subscribed to that channel, you will be able to watch any of those shows.
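The queue-versus-topic distinction above can be sketched with a small in-memory simulation in Java. The class and method names are illustrative; real Event Buses such as Kafka, RabbitMQ, or JMS brokers provide durable, distributed versions of the same semantics. A queue delivers each event to one consumer and then removes it, while a topic broadcasts a copy of each event to every subscriber:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class EventBus {
    // Queue semantics: a consumed event is removed and delivered only once.
    static class PointToPointQueue<E> {
        private final Queue<E> events = new ConcurrentLinkedQueue<>();
        void publish(E event) { events.add(event); }
        E consume() { return events.poll(); }  // null when the queue is empty
    }

    // Topic semantics: every subscriber receives its own copy of each event.
    static class Topic<E> {
        private final List<Queue<E>> subscribers = new ArrayList<>();
        Queue<E> subscribe() {
            Queue<E> inbox = new ConcurrentLinkedQueue<>();
            subscribers.add(inbox);
            return inbox;
        }
        void publish(E event) {
            for (Queue<E> inbox : subscribers) inbox.add(event);
        }
    }
}
```

Publishing "show" to a topic with two subscribers puts a copy in both inboxes, mirroring the TV-channel analogy; consuming from the queue removes the event, mirroring the SMS analogy.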

A real-world example of a real-time problem is credit card fraud detection, where you might have experienced an automated confirmation call to verify the authenticity of a transaction from your bank, if any transaction seems suspicious while being executed.

Near-real-time processing

Near-real-time processing, as its name suggests, is a problem whose response or processing time doesn’t need to be as fast as real time but should be less than 1 hour. One of the features of near-real-time processing is that it processes events in micro batches. For example, a near-real-time process may process data in a batch interval of every 5 minutes, a batch size of every 100 records, or a combination of both (whichever condition is satisfied first).
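The "whichever condition is satisfied first" trigger can be sketched in Java as a small micro-batcher that flushes its buffer either when a size threshold is reached or when the current batch has aged past an interval. This is an illustrative, single-threaded sketch (the thresholds and the `flushTarget` callback are hypothetical, and a real engine such as Spark Structured Streaming manages these triggers for you); note that the time check here only fires when a new event arrives, whereas a production implementation would also use a timer thread:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class MicroBatcher {
    private final List<String> buffer = new ArrayList<>();
    private final int maxSize;             // e.g. 100 records
    private final long maxAgeMillis;       // e.g. 5 minutes
    private final Consumer<List<String>> flushTarget;
    private long batchStart = System.currentTimeMillis();

    MicroBatcher(int maxSize, long maxAgeMillis, Consumer<List<String>> flushTarget) {
        this.maxSize = maxSize;
        this.maxAgeMillis = maxAgeMillis;
        this.flushTarget = flushTarget;
    }

    // Buffer an event; flush when either threshold is hit, whichever first.
    void add(String event) {
        buffer.add(event);
        boolean full = buffer.size() >= maxSize;
        boolean stale = System.currentTimeMillis() - batchStart >= maxAgeMillis;
        if (full || stale) flush();
    }

    void flush() {
        if (!buffer.isEmpty()) flushTarget.accept(new ArrayList<>(buffer));
        buffer.clear();
        batchStart = System.currentTimeMillis();
    }
}
```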

At time tx, all events (E1, E2, and E3) generated between t0 and tx are processed together by the near-real-time processing job. Similarly, all events (E4, E5, and E6) generated between tx and tn are processed together.

Figure 1.5 – Near-real-time processing

Typical near-real-time use cases are recommendation problems such as product recommendations for services such as Amazon or video recommendations for services such as YouTube and Netflix.

Publishing problems

Publishing problems deal with publishing the processed data to different businesses and teams so that data is easily available with proper security and data governance. Since the main goal of the publishing problem is to expose the data to a downstream system or an external application, having extremely robust data security and governance is essential.

Usually, in modern data architectures, data is published in one of three ways:

  • Sorted data repositories
  • Web services
  • Visualizations

Let’s take a closer look at each.

Sorted data repositories

Sorted data repositories is a common term used for various kinds of repositories that are used to store processed data. This is usable and actionable data and can be directly queried by businesses, analytics teams, and other downstream applications for their use cases. They are broadly divided into three types:

  • Data warehouse
  • Data lake
  • Data hub

A data warehouse is a central repository of integrated and structured data that’s mainly used for reporting, data analysis, and Business Intelligence (BI). A data lake consists of structured and unstructured data, which is mainly used for data preparation, reporting, advanced analytics, data science, and Machine Learning (ML). A data hub is the central repository of trusted, governed, and shared data, which enables seamless data sharing between diverse endpoints and connects business applications to analytic structures such as data warehouses and data lakes.

Web services

Another publishing pattern is where data is published as a service, popularly known as Data as a Service. This data publishing pattern has many advantages as it enables security, immutability, and governance by design. Nowadays, as cloud technologies and GraphQL are becoming popular, Data-as-a-Service is getting a lot of traction in the industry.

The two popular mechanisms of publishing Data as a Service are as follows:

  • REST
  • GraphQL

We will discuss these techniques in detail later in this book.
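As a minimal illustration of the Data-as-a-Service idea, the following Java sketch exposes a curated (hard-coded, hypothetical) dataset as a read-only JSON endpoint using the JDK's built-in `com.sun.net.httpserver.HttpServer`. A production service would typically use a framework such as Spring Boot or JAX-RS and add authentication, pagination, and governance controls:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

class DataService {
    // Hypothetical curated dataset, published read-only as JSON.
    static final String CUSTOMERS_JSON =
            "[{\"id\":1,\"segment\":\"premium\"},{\"id\":2,\"segment\":\"standard\"}]";

    // Start the service; pass port 0 to let the OS pick a free port.
    static HttpServer start(int port) {
        try {
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/api/customers", exchange -> {
                byte[] body = CUSTOMERS_JSON.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
            return server;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        HttpServer server = start(0);
        System.out.println("Serving /api/customers on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

The consuming team never touches the underlying storage; they only see the governed, versioned endpoint, which is what makes this pattern secure and immutable by design.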

Visualization

There’s a popular saying: A picture is worth a thousand words. Visualization is a technique by which reports, analytics, and statistics about the data are captured visually in graphs and charts.

Visualization is helpful for businesses and leadership to understand, analyze, and get an overview of the data flowing in their business. This helps a lot in decision-making and business planning.

A few of the most common and popular visualization tools are as follows:

  • Tableau is a proprietary data visualization tool. It comes with multiple source connectors to import data and lets you quickly create visualizations using drag-and-drop components such as graphs and charts. You can find out more about this product at https://www.tableau.com/.
  • Microsoft Power BI is a proprietary tool from Microsoft that allows you to collect data from various data sources to connect and create powerful dashboards and visualizations for BI. While both Tableau and Power BI offer data visualization and BI, Tableau is more suited for seasoned data analysts, while Power BI is useful for non-technical or inexperienced users. Also, Tableau works better with huge volumes of data compared to Power BI. You can find out more about this product at https://powerbi.microsoft.com/.
  • Elasticsearch-Kibana is an open source tool stack with free versions for on-premises installations and paid subscriptions for cloud installations. It helps you ingest data from any data source into Elasticsearch and create visualizations and dashboards using Kibana. Elasticsearch is a powerful Lucene-based text search engine that not only stores the data but also enables various kinds of data aggregation and analysis (including ML analysis). Kibana is a dashboarding tool that works together with Elasticsearch to create very powerful and useful visualizations. You can find out more about these products at https://www.elastic.co/elastic-stack/.

Important note

A Lucene index is a full-text inverted index. This index is extremely powerful and fast for text-based searches and is the core indexing technology behind most search engines. A Lucene index takes all the documents, splits them into words or tokens, and then creates an index entry for each word.
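The note above can be illustrated with a toy inverted index in Java: each document is split into tokens, and the index maps every token to the set of document IDs that contain it. This is only a sketch of the core idea; Lucene additionally stores positions, frequencies, and scoring data:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class InvertedIndex {
    // Map each token to the set of document IDs containing it.
    static Map<String, Set<Integer>> build(List<String> documents) {
        Map<String, Set<Integer>> index = new HashMap<>();
        for (int docId = 0; docId < documents.size(); docId++) {
            // Lowercase and split on non-word characters to get tokens.
            for (String token : documents.get(docId).toLowerCase().split("\\W+")) {
                if (!token.isEmpty()) {
                    index.computeIfAbsent(token, t -> new HashSet<>()).add(docId);
                }
            }
        }
        return index;
    }
}
```

A lookup such as `index.get("data")` then answers "which documents mention data?" with a single map access, which is why inverted indexes make text search fast.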

  • Apache Superset is a completely open source data visualization tool (developed by Airbnb). It is a powerful dashboarding tool and is completely free, but its data source connector support is limited, mostly to SQL databases. A few interesting features are its built-in role-based data access, an API for customization, and extendibility to support new visualization plugins. You can find out more about this product at https://superset.apache.org/.

While we have briefly discussed a few of the visualization tools available in the market, many competitive alternatives exist. Discussing data visualization in more depth is beyond the scope of this book.

So far, we have provided an overview of data engineering and the various types of data engineering problems. In the next section, we will explore what role a Java data architect plays in the data engineering landscape.

Responsibilities and challenges of a Java data architect

Data architects are senior technical leaders who map business requirements to technical requirements, envision technical solutions to solve business problems, and establish data standards and principles. They play a unique role in that they understand both business and technology: like the Janus of business and technology, on one hand they can look at, understand, and communicate with the business, and on the other, they do the same with technology. Data architects create processes that are used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data. According to DAMA's Data Management Body of Knowledge (DMBOK), a data architect provides a standard common business vocabulary, expresses strategic requirements, outlines high-level integrated designs to meet those requirements, and aligns with the enterprise strategy and related business architecture.

The following diagram shows the cross-cutting concerns that a data architect handles:

Figure 1.6 – Cross-cutting concerns of a data architect

The typical responsibilities of a Java data architect are as follows:

  • Interpreting business requirements into technical specifications, which includes data storage and integration patterns, databases, platforms, streams, transformations, and the technology stack
  • Establishing the architectural framework, standards, and principles
  • Developing and designing reference architectures that are used as patterns that can be followed by others to create and improve data systems
  • Defining data flows and their governance principles
  • Recommending the most suitable solutions, along with their technology stacks, while considering scalability, performance, resource availability, and cost
  • Coordinating and collaborating with multiple departments, stakeholders, partners, and external vendors

In the real world, a data architect is supposed to play a combination of three disparate roles, as shown in the following diagram:

Figure 1.7 – Multifaceted role of a data architect

Let’s look at these three architectural roles in more detail:

  • Data architectural gatekeeper: An architectural gatekeeper is a person or a role that ensures the data model is following the necessary standards and that the architecture is following the proper architectural principles. They look for any gaps in terms of the solution or business expectations. Here, a data architect takes a negative role in finding faults or gaps in the product or solution design and delivery (including a lack of or any gap in best practices in the data model, architecture, implementation techniques, testing procedures, continuous integration/continuous delivery (CI/CD) efforts, or business expectations).
  • Data advisor: A data advisor is a data architect that focuses more on finding solutions rather than finding a problem. A data advisor highlights issues, but more importantly, they show an opportunity or propose a solution for them. A data advisor should understand the technical as well as the business aspect of a problem and solution and should be able to advise to improve the solution.
  • Business executive: Apart from the technical roles that a data architect plays, the data architect needs to play an executive role as well. As stated earlier, the data architect is like the Janus of business and technology, so they are expected to be a great communicator and salesperson who can sell their (technical) idea or solution to nontechnical folks. Often, a data architect needs to present elevator pitches to higher leadership to show opportunities and convince them of a solution for business problems. To be successful in this role, a data architect must think like a business executive: What is the ROI? What's in it for me? How much can we save in terms of time and money with this solution or opportunity? Also, a data architect should be concise and articulate in presenting their ideas so that they create immediate interest among the listeners (mostly business executives, clients, or investors).

Let’s understand the difference between a data architect and data engineer.

Data architect versus data engineer

The data architect and data engineer are related roles. A data architect visualizes, conceptualizes, and creates the blueprint of the data engineering solution and framework, while the data engineer takes the blueprint and implements the solution.

Data architects are responsible for putting in order the data chaos generated by enormous piles of business data. Each data analytics or data science team requires a data architect who can visualize and design the data framework that produces clean, analyzed, managed, formatted, and secure data. This framework can then be used by data engineers, data analysts, and data scientists in their work.

Challenges of a data architect

Data architects face a lot of challenges in their day-to-day work. We will focus on the main ones:

  • Choosing the right architectural pattern
  • Choosing the best-fit technology stack
  • Lack of actionable data governance
  • Recommending and communicating effectively to leadership

Let’s take a closer look.

Choosing the right architectural pattern

A single data engineering problem can be solved in many ways. However, with the ever-evolving expectations of customers and the emergence of new technologies, choosing the correct architectural pattern has become more challenging. What's more, with the changing technological landscape, the need for agility and extensibility in architecture has increased manyfold, both to avoid unnecessary costs and to keep the architecture sustainable over time.

Choosing the best-fit technology stack

One of the complex problems that a data architect needs to figure out is the technology stack. Even with a very well-architected solution, whether it flies or flops depends on the technology stack you choose and how you plan to use it. As more and more tools, technologies, databases, and frameworks are developed, it remains a big challenge for data architects to choose an optimal tech stack that can help create a scalable, reliable, and robust solution. Often, a data architect also needs to take non-technical factors into account, such as the projected growth of a tool, the market availability of skilled resources for it, vendor lock-in, cost, and community support options.

Lack of actionable data governance

Data governance is a buzzword in data businesses, but what does it mean? Governance is a broad area that includes both workflows and toolsets to govern data. If either the tools or the workflow process has limitations or is not present, then data governance is incomplete. When we talk about actionable governance, we mean the following elements:

  • Integrating data governance with all data engineering systems to maintain standard metadata, including traceability of events and logs for a standard timeline
  • Integrating data governance concerning all the security policies and standards
  • Role-based and user-based access management policies on all data elements and systems
  • Adherence to defined metrics that are tracked continually
  • Integrating data governance and the data architecture

Data governance should always be aligned with strategic and organizational goals.

Recommending and communicating effectively to leadership

Creating an optimal architecture and choosing the correct set of tools is a challenging task, but it is never enough until they are put into practice. One of the hats a data architect often needs to wear is that of a sales executive who must sell their solution to business executives or upper leadership. These are not usually technical people, and they don't have a lot of time. Data architects, most of whom have strong technical backgrounds, face the daunting task of communicating and selling their ideas to these people. To convince them of the opportunity and the idea, a data architect needs to back it up with proper decision metrics and information that align the opportunity with the broader business goals of the organization.

So far, we have seen the role of a data architect and the common problems that they face. In the next section, we will provide an overview of how a data architect mitigates those challenges on a day-to-day basis.

Techniques to mitigate those challenges

In this section, we will discuss how a data architect can mitigate the aforementioned challenges. To understand the mitigation plan, it is important to understand what the life cycle of a data architecture looks like and how a data architect contributes to it. The following diagram shows the life cycle of a data architecture:

Figure 1.8 – Life cycle of a data architecture

The data architecture starts with defining the problem that the business is facing, which is mainly identified or reported by business teams or customers. Then, the data architects work closely with the business to define the business requirements. However, in a data engineering landscape, that is not enough: in a lot of cases, there are hidden requirements or anomalies. To mitigate such problems, business analysts team up with data architects to analyze the data and the current state of the system, including any existing solution, the current cost or loss of revenue due to the problem, and the infrastructure where the data resides. This helps refine the business requirements. Once the business requirements are more or less frozen, the data architects map them to technical requirements.

Then, the data architect defines the standards and principles of the architecture and determines the priorities of the architecture based on the business need and budget. After that, the data architect creates the most suitable architectures, along with their proposed technology stack. In this phase, the data architects closely work with the data engineers to implement proof of concept (POCs) and evaluate the proposed solution in terms of feasibility, scalability, and performance.

Finally, the architects recommend a solution based on the evaluation results and the architectural priorities defined earlier, and present it to the business. Feedback is then received from the business and clients based on priorities such as cost, timeline, operational cost, and resource availability. It usually takes a few iterations to solidify the architecture and reach an agreement.
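This evaluation and recommendation step is often formalized as a weighted decision matrix: each candidate architecture is scored against the agreed priorities, and the weights reflect what the business cares about most. The following Java sketch illustrates the idea; the criteria names, weights, and candidate scores are purely illustrative assumptions, not values prescribed by the book:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical weighted decision matrix: each candidate architecture is
// scored (1-5) against criteria that carry business-driven weights.
public class DecisionMatrix {

    // Returns the weighted score of one candidate given its per-criterion scores.
    static double weightedScore(Map<String, Double> weights, Map<String, Integer> scores) {
        double total = 0.0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            total += e.getValue() * scores.getOrDefault(e.getKey(), 0);
        }
        return total;
    }

    public static void main(String[] args) {
        // Weights sum to 1.0; the business decides these, not the architect alone.
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("cost", 0.4);
        weights.put("timeToMarket", 0.3);
        weights.put("scalability", 0.3);

        // Two illustrative candidates with made-up scores.
        Map<String, Integer> lambdaArch = Map.of("cost", 3, "timeToMarket", 4, "scalability", 5);
        Map<String, Integer> batchOnly = Map.of("cost", 5, "timeToMarket", 5, "scalability", 2);

        System.out.printf("Lambda architecture: %.1f%n", weightedScore(weights, lambdaArch));
        System.out.printf("Batch-only pipeline: %.1f%n", weightedScore(weights, batchOnly));
    }
}
```

Because the weights come from the business (here, cost matters most), a cheaper batch-only design can outscore a more scalable one; changing the weights can flip the recommendation, which is why agreeing on priorities up front matters.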

Once an agreement has been reached, the solution is implemented. Based on the implementation challenges and particular use cases, the architecture may be revised or tweaked slightly. Once an architecture is implemented and goes to production, it enters the maintenance and operations phase. During maintenance and operations, feedback is sometimes provided that results in a few architectural improvements and changes, but these are seldom needed if the solution is well architected in the first place.

In the preceding diagram, blue boxes indicate major involvement from a customer, green boxes indicate major involvement from a data architect, yellow boxes indicate that a data architect shares involvement equally with another stakeholder, and gray boxes indicate that the data architect has the least involvement.

Now that we understand the life cycle of a data architecture and a data architect’s role in its various phases, we will focus on how to mitigate the challenges faced by a data architect. This book covers them in the following way:

  • Understanding the business data, its characteristics, and storage options:
    • Data and its characteristics were covered earlier in this chapter; it will also be covered partly in Chapter 2, Data Storage and Databases
    • Storage options will also be discussed in Chapter 2, Data Storage and Databases
  • Analyzing and defining the business problem:
    • Understanding the various kinds of data engineering problems (covered in this chapter)
    • A step-by-step analysis of how an architect should analyze, classify, and define a business problem is provided in Chapter 4, ETL Data Load – A Batch-Based Solution to Ingest Data in a Data Warehouse, Chapter 5, Architecting a Batch Processing Pipeline, and Chapter 6, Architecting a Real-Time Processing Pipeline
  • The challenge of choosing the right architecture. To choose the right architectural pattern, we should be aware of the following:
    • The types of data engineering problems and the dimensions of data (we discussed this in this chapter)
    • The different types of data and various data storage available (Chapter 2, Data Storage and Databases)
    • How to model and design different kinds of data while storing it in a database (Chapter 2, Data Storage and Databases)
    • Understanding various architectural patterns for data processing problems (Chapter 7, Core Architectural Design Patterns)
    • Understanding the architectural patterns of publishing the data (Section 3, Enabling Data as a Service)
  • The challenge of choosing the best-fit technology stack and data platform. To choose the correct set of tools, we need to know which tools are available and when to use each of them:
    • How to choose the correct database will be discussed in Chapter 2, Data Storage and Databases
    • How to choose the correct platform will be discussed in Chapter 3, Identifying the Right Data Platform
    • A step-by-step hands-on guide to using different tools in batch processing will be covered in Chapter 4, ETL Data Load – A Batch-Based Solution to Ingest Data in a Data Warehouse, and Chapter 5, Architecting a Batch Processing Pipeline
    • A step-by-step guide to architecting real-time stream processing and choosing the correct tools will be covered in Chapter 6, Architecting a Real-Time Processing Pipeline
    • The different tools and technologies used in data publishing will be discussed in Chapter 9, Exposing MongoDB Data as a Service, and Chapter 10, Federated and Scalable DaaS with GraphQL
  • The challenge of creating a design for scalability and performance will be covered in Chapter 11, Measuring Performance and Benchmarking Your Applications. Here, we will discuss the following:
    • Performance engineering basics
    • Publishing performance benchmarks
    • Performance optimization and tuning
  • The challenge of a lack of data governance. Various data governance and security principles and tools will be discussed in Chapter 8, Enabling Data Security and Governance.
  • The challenge of evaluating architectural solutions and recommending them to leadership. In the final chapter of this book (Chapter 12, Evaluating, Recommending, and Presenting Your Solution), we will use the various concepts that we have learned throughout this book to create actionable data metrics and determine the most optimized solution. Finally, we will discuss techniques that an architect can apply to effectively communicate with business stakeholders, executive leadership, and investors.

In this section, we discussed how this book can help an architect overcome the various challenges they will face and make them more effective in their role. Now, let’s summarize this chapter.

Summary

In this chapter, we learned what data engineering is and looked at a few practical examples of data engineering. Then, we covered the basics of data engineering, including the dimensions of data and the kinds of problems that are solved by data engineers. We also provided a high-level overview of various kinds of processing problems and publishing problems in a data engineering landscape. Then, we discussed the roles and responsibilities of a data architect and the kind of challenges they face. We also briefly covered the way this book will guide you to overcome challenges and dilemmas faced by a data architect and help you become a better Java data architect.

Now that you understand the basic landscape of data engineering and what this book will focus on, in the next chapter, we will walk through various data formats, data storage options, and databases and learn how to choose one for the problem at hand.


Key benefits

  • Learn how to adapt to the ever-evolving data architecture technology landscape
  • Understand how to choose the best suited technology, platform, and architecture to realize effective business value
  • Implement effective data security and governance principles

Description

Java architectural patterns and tools help architects build reliable, scalable, and secure data engineering solutions that collect, manipulate, and publish data. This book will help you make the most of architecting data solutions, with clear and actionable advice from an expert. You’ll start with an overview of data architecture, exploring the responsibilities of a Java data architect and learning about various data formats, data storage, databases, and data application platforms, as well as how to choose them. Next, you’ll understand how to architect batch and real-time data processing pipelines. You’ll also get to grips with the various Java data processing patterns, before progressing to data security and governance. The later chapters will show you how to publish Data as a Service and how to architect it. Finally, you’ll focus on how to evaluate and recommend an architecture by developing performance benchmarks, estimations, and various decision metrics. By the end of this book, you’ll be able to successfully orchestrate data architecture solutions using Java and related technologies, as well as evaluate and present the most suitable solution to your clients.

Who is this book for?

Data architects, aspiring data architects, Java developers, and anyone who wants to develop or optimize scalable data architecture solutions using Java will find this book useful. A basic understanding of data architecture and Java programming is required to get the best out of this book.

What you will learn

  • Analyze and use the best data architecture patterns for problems
  • Understand when and how to choose Java tools for a data architecture
  • Build batch and real-time data engineering solutions using Java
  • Discover how to apply security and governance to a solution
  • Measure performance, publish benchmarks, and optimize solutions
  • Evaluate, choose, and present the best architectural alternatives
  • Understand how to publish Data as a Service using GraphQL and a REST API

Product Details

Publication date : Sep 30, 2022
Length: 382 pages
Edition : 1st
Language : English
ISBN-13 : 9781801073080



Table of Contents

18 Chapters
Section 1 – Foundation of Data Systems
Chapter 1: Basics of Modern Data Architecture
Chapter 2: Data Storage and Databases
Chapter 3: Identifying the Right Data Platform
Section 2 – Building Data Processing Pipelines
Chapter 4: ETL Data Load – A Batch-Based Solution to Ingesting Data in a Data Warehouse
Chapter 5: Architecting a Batch Processing Pipeline
Chapter 6: Architecting a Real-Time Processing Pipeline
Chapter 7: Core Architectural Design Patterns
Chapter 8: Enabling Data Security and Governance
Section 3 – Enabling Data as a Service
Chapter 9: Exposing MongoDB Data as a Service
Chapter 10: Federated and Scalable DaaS with GraphQL
Section 4 – Choosing Suitable Data Architecture
Chapter 11: Measuring Performance and Benchmarking Your Applications
Chapter 12: Evaluating, Recommending, and Presenting Your Solutions
Index
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.7 out of 5 (6 ratings). 5 star: 66.7%, 4 star: 33.3%, 3 star: 0%, 2 star: 0%, 1 star: 0%
JJ, Jan 25, 2023 (5/5)
I found that the Author has a clear flow through the data engineering concepts, step by step adaptation and DaaS layer. It has covered the details we will need during implementation clearly, instead of a bird's eye view like most books. I would recommend for my colleagues
Amazon Verified review
Amazon Customer, Nov 29, 2022 (5/5)
I thoroughly enjoyed reading this book. This is one of the best technical book on how to Architect the right Batch or Real time Data Pipelines. It help us to understand the core design Patterns and DAAS implementations. Had an opportunity of closely collaborating with Sinchan (Author) on multiple projects in the past few years. With confidence of the content published with respect to data scalability, optimization, security and performance strategies mentioned in the book, this will definitely assist us in all our existing and upcoming successful implementations/deliverables. This is an essential book for all the aspiring Data Architects.
Amazon Verified review
Geraldo Netto, Jan 25, 2023 (5/5)
First things first, it goes from the ground up, even the big picture of current data architecture/engineering and its different roles. Then, it moves to the most fundamental concepts on data and databases like data types and formats, a bit of storage, the classical data warehouse, a bit of relational and NoSQL including its modeling. Then, we have a bit of cloud/Hadoop. At this point, we reach the data pipeline going from classical ETL, batch and stream processing, including patterns for it - plain beauty, really!!! Data security and governance - certainly one of the areas that people most neglect, unfortunately. The third part is about data as a service discussing MongoDB, Spring Boot, GraphQL with use cases. And by the very end of the book, we still have guidelines for performance tuning and how to present/'sell' your solution. So, the book simply covers the process end-to-end.
Some highlights:
- it covers the whole data workflow
- it's use case based
- software/tools and patterns are widely deployed, meaning that a data architect's job will actually touch one (if not all) of the software mentioned through the book
- a derivation of the above: it's easier to get help with popular software/tools than 'esoteric' or very new tools
On the other side (maybe for a second edition :)):
- maybe discuss a bit more about parsing data, specially the tricky ones like multi-line logs
- on performance tuning it would be interesting to have more details on other GC algos like ZGC, which is the default GC algo for JDK 15 and promises to deliver fairly decent performance under a millisecond for really large heaps
- while the discussion on the lambda pattern is brilliant, it would be interesting to discuss a bit more about the CAP theorem and the PACELC theorem (an extension of CAP)
Overall, a great introduction for data architecture on Java
Amazon Verified review
Steven Fernandes, Jan 06, 2023 (5/5)
Section 1 of the book starts with a short introduction to data engineering, the basic concepts of data engineering, and the role a Java data architect plays in data engineering. It then briefly discusses various data types, storage formats, data formats, and databases. It also discusses when to use them. It also explains how to identify the Right Data Platform, provides an overview of various platforms to deploy data pipelines, and how to choose the correct platform. Section 2 of the book gives an overview of ETL data load. It discusses how to approach, analyze, and architect an effective solution for a batch-based data ingestion problem using Spring Batch and Java. Next, it focuses on architecting a batch processing pipeline and discusses how to architect and implement a data analysis pipeline in AWS using S3, Apache Spark (Java), AWS Elastic MapReduce (EMR), and AWS Athena for a big data use case. The book also provides a step-by-step guide to building a real-time streaming solution to predict the risk category of a loan application using Java, Kafka, and related technologies. It discusses common architectural patterns used to solve data engineering problems and when to use them. Section 2 ends with data governance and outlines how to apply it using a practical use case. It also briefly touches upon the topic of data security. Section 3 provides a step-by-step guide on building Data as a Service to expose MongoDB data using a REST API. It also discusses GraphQL, various GraphQL patterns, and how to publish data using GraphQL. The final section, 4, provides an overview of performance engineering, how to measure performance and create benchmarks, and how to optimize performance. It discusses evaluating and choosing the best-suited alternative among various architectures and how to present the recommended architecture effectively.
Amazon Verified review
POE, Feb 09, 2023 (4/5)
This is a well-written Java book that focuses on a disparate set of topics. These topics include design patterns, data architecture (storage, databases, etc.), data processing pipelines, data security, benchmarks, data-as-a-service, and more. The book is written for seasoned Java developers. Technologies featured in the book include various AWS products, GraphQL, Kafka, Hadoop, and REST API. This book is a good edition to your Java library if you want to strengthen your data architecture knowledge with Java.
Amazon Verified review
