Scalable Data Architecture with Java

Basics of Modern Data Architecture

Since the start of the 21st century, ever-increasing internet usage and the emergence of more powerful data insight tools and technologies have produced a data explosion, and data has become the new gold. This has created an increased demand for useful and actionable data, as well as for quality data engineering solutions. However, architecting and building scalable, reliable, and secure data engineering solutions is often complicated and challenging.

A poorly architected solution often fails to meet the needs of the business: either the data quality is poor, the solution fails to meet its SLAs, or it isn't sustainable or scalable as the data grows in production. To help data engineers and architects build better solutions, dozens of open source and proprietary tools are released every year. Yet even a well-designed solution sometimes fails because of a poor choice or implementation of these tools.

This book discusses various architectural patterns, tools, and technologies, with step-by-step hands-on explanations, to help an architect choose the most suitable solution and technology stack for a data engineering problem. Specifically, it focuses on tips and tricks that make architectural decisions easier. It also covers other essential skills that a data architect requires, such as data governance, data security, performance engineering, and effective architectural presentation to customers or upper management.

In this chapter, we will explore the landscape of data engineering and the basic features of data in modern business ecosystems. We will cover various categories of modern data engineering problems that a data architect tries to solve. Then, we will learn about the roles and responsibilities of a Java data architect. We will also discuss the challenges that a data architect faces while designing a data engineering solution. Finally, we will provide an overview of the techniques and tools that we’ll discuss in this book and how they will help an aspiring data architect do their job more efficiently and be more productive.

In this chapter, we’re going to cover the following main topics:

  • Exploring the landscape of data engineering
  • Responsibilities and challenges of a Java data architect
  • Techniques to mitigate those challenges

Exploring the landscape of data engineering

In this section, you will learn what data engineering is and why it is needed. You will also learn about the various categories of data engineering problems and some real-world scenarios where they are found. It is important to understand the varied nature of data engineering problems before you learn how to architect solutions for such real-world problems.

What is data engineering?

By definition, data engineering is the branch of software engineering that specializes in collecting, analyzing, transforming, and storing data in a usable and actionable form.

With the growth of social platforms, search engines, and online marketplaces, there has been an exponential increase in the rate of data generation. In 2020 alone, around 2,500 petabytes of data was generated by humans each day. It is estimated that this figure will go up to 468 exabytes per day by 2025. The high volume and availability of data have enabled rapid technological development in AI and data analytics. This has led businesses, corporations, and governments to gather insights like never before to give customers a better experience of their services.

However, raw data is seldom usable as-is. As a result, there is an increased demand for usable data that is secure and reliable. Data engineering revolves around creating scalable solutions to collect raw data and then analyze, validate, transform, and store it in a usable and actionable format. In certain scenarios and organizations, modern businesses additionally expect this usable and actionable data to be published as a service.

Before we dive deeper, let’s explore a few practical use cases of data engineering:

  • Use case 1: American Express (Amex) is a leading credit card provider that needs to group customers with similar spending behavior together so that it can generate personalized offers and discounts for targeted customers. To do this, Amex needs to run a clustering algorithm on its data. However, the data is collected from various sources: some of it flows in from mobile apps, some from different Salesforce organizations such as sales and marketing, and some from logs and JSON events. This data is known as raw data, and it can contain junk characters, missing fields, special characters, and sometimes unstructured data such as log files. Here, the data engineering team ingests the data from the different sources, cleans it, transforms it, and stores it in a usable structured format. This ensures that the application that performs clustering can run on clean and sorted data.
  • Use case 2: A health insurance provider receives data from multiple sources. This data comes from various consumer-facing applications, third-party vendors, Google Analytics, other marketing platforms, and mainframe batch jobs. However, the company wants a single data repository to be created that can serve different teams as the source of clean and sorted data. Such a requirement can be implemented with the help of data engineering.

Now that we understand data engineering, let’s look at a few of its basic concepts. We will start by looking at the dimensions of data.

Dimensions of data

Any discussion on data engineering is incomplete without talking about the dimensions of data. The dimensions of data are some basic characteristics by which the nature of data can be analyzed. The starting point of data engineering is analyzing and understanding the data.

To successfully analyze and build a data-oriented solution, the four Vs of modern data analysis are very important. These can be seen in the following diagram:

Figure 1.1 – Dimensions of data

Let’s take a look at each of these Vs in detail:

  • Volume: This refers to the size of data, which can range from a few bytes to a few hundred petabytes. Volume analysis usually involves understanding the size of the whole dataset, as well as the size of a single data record or event. Understanding the size is essential for choosing the right technologies and making infrastructure sizing decisions to process and store the data.
  • Velocity: This refers to the speed at which data is getting generated. High-velocity data requires distributed processing. Analyzing the speed of data generation is especially critical for scenarios where businesses require usable data to be made available in real-time or near-real-time.
  • Variety: This refers to the variation in the formats in which a data source can generate data. Usually, data falls into one of the following three types:
    • Structured: Structured data is where the number of columns, their data types, and their positions are fixed. All classical datasets that fit neatly in the relational data model are perfect examples of structured data.
    • Unstructured: These datasets don’t conform to a specific structure. Each record in such a dataset can have any number of columns in any arbitrary format. Examples include audio and video files.
    • Semi-structured: Semi-structured data has a structure, but the order of the columns and the presence of a column in each record are optional. A classic example of such a dataset is any hierarchical data source, such as a .json or .xml file.
  • Veracity: This refers to the trustworthiness of the data. In simple terms, it relates to the quality of the data. Analyzing the noise in the data is as important as analyzing any other aspect of it, because this analysis helps create the robust processing rules that ultimately determine how successful a data engineering solution is. Many well-engineered and well-designed data engineering solutions fail in production due to a lack of understanding of the quality and noise of the source data. (A small profiling sketch follows this list.)
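To make the four Vs concrete, the following is a small, hypothetical Java sketch (not from the book) of how an architect might record a dataset's profile during initial analysis. The profile fields, thresholds, and veracity scale are illustrative assumptions, not an established API:

```java
import java.util.EnumSet;

public class DataProfileExample {

    enum Variety { STRUCTURED, SEMI_STRUCTURED, UNSTRUCTURED }

    // A minimal record describing a dataset along the four Vs.
    record DataProfile(
            long volumeBytes,          // Volume: total size of the dataset
            double eventsPerSecond,    // Velocity: rate of data generation
            EnumSet<Variety> variety,  // Variety: formats present in the source
            double veracityScore) {    // Veracity: 0.0 (pure noise) to 1.0 (fully trusted)
    }

    public static void main(String[] args) {
        DataProfile clickstream = new DataProfile(
                5L * 1024 * 1024 * 1024 * 1024,       // ~5 TB of data
                12_000,                               // 12,000 events per second
                EnumSet.of(Variety.SEMI_STRUCTURED),  // JSON events
                0.7);                                 // known data-quality issues

        // A crude, illustrative sizing rule: very large or very fast datasets
        // suggest distributed (big data) processing rather than a single-node job.
        boolean needsDistributedProcessing =
                clickstream.volumeBytes() > 10L * 1024 * 1024 * 1024 * 1024
                || clickstream.eventsPerSecond() > 10_000;
        System.out.println("Distributed processing needed: " + needsDistributedProcessing);
    }
}
```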

Now that we have a fair idea of the characteristics by which the nature of data can be analyzed, let’s understand how they play a vital role in different types of data engineering problems.

Types of data engineering problems

Broadly speaking, the kinds of problems that data engineers solve can be classified into two basic types:

  • Processing problems
  • Publishing problems

Let’s take a look at these problems in more detail.

Processing problems

The problems that are related to collecting raw data or events, processing them, and storing them in a usable or actionable data format are broadly categorized as processing problems. Typical use cases can be a data ingestion problem such as Extract, Transform, Load (ETL) or a data analytics problem such as generating a year-on-year report.

Again, processing problems can be divided into three major categories, as follows:

  • Batch processing
  • Real-time processing
  • Near real-time processing

This can be seen in the following diagram:

Figure 1.2 – Categories of processing problems

Let’s take a look at each one of these categories in detail.

Batch processing

If the processing SLA is more than 1 hour (for example, if the processing needs to be done once every 2 hours, once daily, once weekly, or once biweekly), then the problem is called a batch processing problem. When a system processes data at such long intervals, it usually processes a batch of data records rather than a single record or event. Hence, such processing is called batch processing:

Figure 1.3 – Batch processing problem

Usually, a batch processing solution depends on the volume of data. If the data volume is more than tens of terabytes, it usually needs to be processed as big data. Also, since big data processes are schedule-driven, a workflow manager or scheduler needs to run their jobs. We will discuss batch processing in more detail later in this book.
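To illustrate the schedule-driven nature of batch processing, here is a minimal, hypothetical sketch using only the JDK's ScheduledExecutorService. The extract and load steps are stubs; a production solution would instead rely on an external workflow manager or scheduler (for example, cron or Airflow) and a framework such as Spring Batch, which is covered later in this book:

```java
import java.time.LocalDateTime;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DailyBatchJob {

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run the batch once every 24 hours; in production this trigger would
        // come from an external scheduler rather than an in-process timer.
        scheduler.scheduleAtFixedRate(DailyBatchJob::runBatch, 0, 24, TimeUnit.HOURS);
    }

    static void runBatch() {
        System.out.println("Batch started at " + LocalDateTime.now());
        List<String> records = extract();      // read a whole batch of records
        List<String> cleaned = records.stream()
                .map(String::trim)             // transform: a trivial cleaning step
                .filter(r -> !r.isEmpty())
                .toList();
        load(cleaned);                         // write to the target store
    }

    static List<String> extract() { return List.of(" a ", "", " b "); } // stubbed source
    static void load(List<String> batch) { System.out.println("Loaded " + batch); }
}
```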

Real-time processing

A real-time processing problem is a use case where raw data/events are to be processed on the fly and the response or the processing outcome should be available within seconds, or at most within 2 to 5 minutes.

As shown in the following diagram, a real-time process receives data in the form of an event stream and immediately processes it. Then, it either sends the processed event to a sink or to another stream of events to be processed further. Since this kind of processing happens on a stream of events, this is known as real-time stream processing:

Figure 1.4 – Real-time stream processing

As shown in Figure 1.4, event E0 gets processed and sent out by the streaming application while events E1, E2, and E3 wait in the queue. At t1, event E1 also gets processed, showing the continuous processing of events by the streaming application.

An event can be generated at any time (24/7), which creates a new kind of problem. If the producer application sends an event directly to a consumer, there is a chance of event loss unless the consumer application is running 24/7. Even bringing down the consumer application for maintenance or upgrades isn't possible, which means the consumer application must have zero downtime. However, an application with zero downtime is not realistic. Such a model of communication between applications is called point-to-point communication.

Another challenge in point-to-point communication for real-time problems is the speed of processing: the consumer must always process events at least as fast as the producer generates them. Otherwise, there will be a loss of events or a possible memory overrun in the consumer. So, instead of sending events directly to the consumer application, they are sent asynchronously to an Event Bus or a Message Bus. An Event Bus is a high-availability container, such as a queue or a topic, that can hold events. This pattern of sending and receiving data asynchronously by introducing a high-availability Event Bus in between is called the Pub-Sub framework (a minimal sketch follows the list of terms below).

The following are some important terms related to real-time processing problems:

  • Event: A data packet generated as a result of an action, a trigger, or an occurrence. Events are also popularly known as messages in the Pub-Sub framework.
  • Producer: A system or application that produces and sends events to a Message Bus is called a publisher or a producer.
  • Consumer: A system or application that consumes events from a Message Bus to process is called a consumer or a subscriber.
  • Queue: This has a single producer and a single consumer. Once a message/event is consumed by a consumer, that event is removed from the queue. As an analogy, it’s like an SMS or an email sent to you by one of your friends.
  • Topic: Unlike a queue, a topic can have multiple consumers and producers. It's a broadcasting channel. As an analogy, it's like a TV channel such as HBO, where multiple producers host their shows, and if you have subscribed to that channel, you can watch any of those shows.
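The following is a hedged sketch of the Pub-Sub pattern using Apache Kafka's Java client (the kafka-clients library, which this book uses for streaming). The broker address, topic name, and payload are illustrative assumptions; Chapter 6 builds a complete real-time pipeline:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class PubSubSketch {

    public static void main(String[] args) {
        // Producer: publishes events to the Event Bus asynchronously, so it
        // never depends on the consumer being up.
        Properties prodProps = new Properties();
        prodProps.put("bootstrap.servers", "localhost:9092");
        prodProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        prodProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(prodProps)) {
            producer.send(new ProducerRecord<>("transactions", "card-123", "{\"amount\": 42.0}"));
        }

        // Consumer: subscribes to the topic and processes events at its own pace;
        // events published while it was down are still waiting on the bus.
        Properties consProps = new Properties();
        consProps.put("bootstrap.servers", "localhost:9092");
        consProps.put("group.id", "fraud-detector");
        consProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consProps)) {
            consumer.subscribe(List.of("transactions"));
            for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(5))) {
                System.out.println("Processing event: " + rec.value());
            }
        }
    }
}
```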

A real-world example of a real-time problem is credit card fraud detection, where you might have experienced an automated confirmation call to verify the authenticity of a transaction from your bank, if any transaction seems suspicious while being executed.

Near-real-time processing

Near-real-time processing, as its name suggests, is a problem whose response or processing time doesn't need to be as fast as real time but should be less than 1 hour. One of the defining features of near-real-time processing is that it processes events in micro batches. For example, a near-real-time process may process data in a batch interval of 5 minutes, a batch size of 100 records, or a combination of both (whichever condition is satisfied first).

At time tx, all the events (E1, E2, and E3) generated between t0 and tx are processed together by the near-real-time processing job. Similarly, all the events (E4, E5, and E6) generated between tx and tn are processed together.

Figure 1.5 – Near-real-time processing

Typical near-real-time use cases are recommendation problems such as product recommendations for services such as Amazon or video recommendations for services such as YouTube and Netflix.
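To make the micro-batching idea concrete, here is a small, framework-free Java sketch (an illustration, not the book's implementation) that flushes a buffer when either 100 records have accumulated or 5 minutes have elapsed, whichever comes first:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class MicroBatcher {

    private static final int MAX_BATCH_SIZE = 100;
    private static final Duration MAX_BATCH_AGE = Duration.ofMinutes(5);

    private final List<String> buffer = new ArrayList<>();
    private Instant batchStart = Instant.now();

    // Called for every incoming event.
    public synchronized void onEvent(String event) {
        buffer.add(event);
        boolean sizeReached = buffer.size() >= MAX_BATCH_SIZE;
        boolean ageReached = Duration.between(batchStart, Instant.now())
                                     .compareTo(MAX_BATCH_AGE) >= 0;
        if (sizeReached || ageReached) {
            processBatch(new ArrayList<>(buffer)); // hand off a copy of the micro batch
            buffer.clear();
            batchStart = Instant.now();
        }
    }

    private void processBatch(List<String> batch) {
        System.out.println("Processing micro batch of " + batch.size() + " events");
    }
}
```

Note that this sketch only checks the time condition when an event arrives; a real implementation (for example, Spark's micro-batch triggers) would also use a timer so that a sparse stream still flushes on schedule.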

Publishing problems

Publishing problems deal with publishing the processed data to different businesses and teams so that data is easily available with proper security and data governance. Since the main goal of the publishing problem is to expose the data to a downstream system or an external application, having extremely robust data security and governance is essential.

Usually, in modern data architectures, data is published in one of three ways:

  • Sorted data repositories
  • Web services
  • Visualizations

Let’s take a closer look at each.

Sorted data repositories

Sorted data repositories is a common term for the various kinds of repositories used to store processed data. This data is usable and actionable and can be directly queried by businesses, analytics teams, and other downstream applications for their use cases. These repositories are broadly divided into three types:

  • Data warehouse
  • Data lake
  • Data hub

A data warehouse is a central repository of integrated and structured data that’s mainly used for reporting, data analysis, and Business Intelligence (BI). A data lake consists of structured and unstructured data, which is mainly used for data preparation, reporting, advanced analytics, data science, and Machine Learning (ML). A data hub is the central repository of trusted, governed, and shared data, which enables seamless data sharing between diverse endpoints and connects business applications to analytic structures such as data warehouses and data lakes.

Web services

Another publishing pattern is where data is published as a service, popularly known as Data as a Service (DaaS). This publishing pattern has many advantages, as it enables security, immutability, and governance by design. Nowadays, as cloud technologies and GraphQL become more popular, Data as a Service is gaining a lot of traction in the industry.

The two popular mechanisms of publishing Data as a Service are as follows:

  • REST
  • GraphQL

We will discuss these techniques in detail later in this book.
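As a flavor of what publishing data as a service looks like, here is a minimal, hypothetical REST endpoint sketched with Spring Boot (spring-boot-starter-web assumed). The /customers resource and its fields are stand-ins; Chapter 9 builds a full Data-as-a-Service layer over MongoDB:

```java
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class DaasSketch {

    public static void main(String[] args) {
        SpringApplication.run(DaasSketch.class, args);
    }

    // GET /customers/42 returns a curated, governed view of a customer record.
    @GetMapping("/customers/{id}")
    public Map<String, Object> customer(@PathVariable String id) {
        // In a real service, this would query the sorted data repository
        // and apply role-based access rules before returning the data.
        return Map.of("id", id, "segment", "high-spender", "riskScore", 0.12);
    }
}
```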

Visualization

There’s a popular saying: A picture is worth a thousand words. Visualization is a technique by which reports, analytics, and statistics about the data are captured visually in graphs and charts.

Visualization is helpful for businesses and leadership to understand, analyze, and get an overview of the data flowing in their business. This helps a lot in decision-making and business planning.

A few of the most common and popular visualization tools are as follows:

  • Tableau is a proprietary data visualization tool. It comes with multiple source connectors to import data and lets you create fast, easy visualizations using drag-and-drop components such as graphs and charts. You can find out more about this product at https://www.tableau.com/.
  • Microsoft Power BI is a proprietary tool from Microsoft that allows you to collect data from various data sources to connect and create powerful dashboards and visualizations for BI. While both Tableau and Power BI offer data visualization and BI, Tableau is more suited for seasoned data analysts, while Power BI is useful for non-technical or inexperienced users. Also, Tableau works better with huge volumes of data compared to Power BI. You can find out more about this product at https://powerbi.microsoft.com/.
  • Elasticsearch-Kibana is an open source tool with free versions for on-premises installations and paid subscriptions for cloud installations. It helps you ingest data from almost any data source into Elasticsearch and create visualizations and dashboards using Kibana. Elasticsearch is a powerful Lucene-based text search engine that not only stores the data but also enables various kinds of data aggregation and analysis (including ML analysis). Kibana is a dashboarding tool that works together with Elasticsearch to create very powerful and useful visualizations. You can find out more about these products at https://www.elastic.co/elastic-stack/.

Important note

A Lucene index is a full-text inverted index. This index is extremely powerful and fast for text-based searches and is the core indexing technology behind most search engines. A Lucene index takes all the documents, splits them into words or tokens, and then creates an index entry for each word.

  • Apache Superset is a completely open source data visualization tool (originally developed at Airbnb). It is a powerful dashboarding tool and is completely free, but its data source connector support is limited, mostly to SQL databases. A few interesting features are its built-in role-based data access, an API for customization, and its extensibility to support new visualization plugins. You can find out more about this product at https://superset.apache.org/.
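To connect the Lucene note above to code, here is a toy Java sketch of an inverted index that maps each token to the set of document IDs containing it. Real Lucene indexes add tokenization rules, positions, scoring, and compression on top of this basic idea:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TinyInvertedIndex {

    // token -> IDs of the documents that contain it
    private final Map<String, Set<Integer>> index = new HashMap<>();

    public void addDocument(int docId, String text) {
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                index.computeIfAbsent(token, t -> new HashSet<>()).add(docId);
            }
        }
    }

    public Set<Integer> search(String token) {
        return index.getOrDefault(token.toLowerCase(), Set.of());
    }

    public static void main(String[] args) {
        TinyInvertedIndex idx = new TinyInvertedIndex();
        idx.addDocument(1, "Scalable data architecture");
        idx.addDocument(2, "Data engineering with Java");
        System.out.println(idx.search("data")); // prints the IDs of both documents
    }
}
```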

While we have briefly discussed a few of the visualization tools available in the market, there are many other competitive alternatives. A deeper discussion of data visualization is beyond the scope of this book.

So far, we have provided an overview of data engineering and the various types of data engineering problems. In the next section, we will explore what role a Java data architect plays in the data engineering landscape.

Responsibilities and challenges of a Java data architect

Data architects are senior technical leaders who map business requirements to technical requirements, envision technical solutions to solve business problems, and establish data standards and principles. Data architects play a unique role in that they understand both the business and the technology. They are like the Janus of business and technology: on one hand, they can observe, understand, and communicate with the business; on the other, they do the same with technology. Data architects create the processes used to plan, specify, enable, create, acquire, maintain, use, archive, retrieve, control, and purge data. According to DAMA's Data Management Body of Knowledge, a data architect provides a standard common business vocabulary, expresses strategic requirements, outlines high-level integrated designs to meet those requirements, and aligns with the enterprise strategy and related business architecture.

The following diagram shows the cross-cutting concerns that a data architect handles:

Figure 1.6 – Cross-cutting concerns of a data architect

The typical responsibilities of a Java data architect are as follows:

  • Translating business requirements into technical specifications, including data storage and integration patterns, databases, platforms, streams, transformations, and the technology stack
  • Establishing the architectural framework, standards, and principles
  • Developing and designing reference architectures that are used as patterns that can be followed by others to create and improve data systems
  • Defining data flows and their governance principles
  • Recommending the most suitable solutions, along with their technology stacks, while considering scalability, performance, resource availability, and cost
  • Coordinating and collaborating with multiple departments, stakeholders, partners, and external vendors

In the real world, a data architect is supposed to play a combination of three disparate roles, as shown in the following diagram:

Figure 1.7 – Multifaceted role of a data architect

Let’s look at these three architectural roles in more detail:

  • Data architectural gatekeeper: An architectural gatekeeper is a person or a role that ensures the data model is following the necessary standards and that the architecture is following the proper architectural principles. They look for any gaps in terms of the solution or business expectations. Here, a data architect takes a negative role in finding faults or gaps in the product or solution design and delivery (including a lack of or any gap in best practices in the data model, architecture, implementation techniques, testing procedures, continuous integration/continuous delivery (CI/CD) efforts, or business expectations).
  • Data advisor: A data advisor is a data architect who focuses more on finding solutions than on finding problems. A data advisor highlights issues but, more importantly, shows an opportunity or proposes a solution for them. A data advisor should understand both the technical and the business aspects of a problem and its solution and should be able to advise on how to improve the solution.
  • Business executive: Apart from the technical roles that a data architect plays, they need to play an executive role as well. As stated earlier, the data architect is like the Janus of business and technology, so they are expected to be a great communicator and salesperson who can sell their (technical) idea or solution to nontechnical folks. Often, a data architect needs to deliver elevator pitches to senior leadership to show opportunities and convince them of a solution to a business problem. To be successful in this role, a data architect must think like a business executive: What is the ROI? What's in it for me? How much can we save in time and money with this solution or opportunity? Also, a data architect should be concise and articulate in presenting their ideas so that they create immediate interest among the listeners (mostly business executives, clients, or investors).

Let’s understand the difference between a data architect and data engineer.

Data architect versus data engineer

The data architect and data engineer are related roles. A data architect visualizes, conceptualizes, and creates the blueprint of the data engineering solution and framework, while the data engineer takes the blueprint and implements the solution.

Data architects are responsible for bringing order to the data chaos generated by enormous piles of business data. Each data analytics or data science team requires a data architect who can visualize and design the data framework that produces clean, analyzed, managed, formatted, and secure data. This framework can then be used by data engineers, data analysts, and data scientists in their work.

Challenges of a data architect

Data architects face a lot of challenges in their day-to-day work. Here, we will focus on the main ones:

  • Choosing the right architectural pattern
  • Choosing the best-fit technology stack
  • Lack of actionable data governance
  • Recommending and communicating effectively to leadership

Let’s take a closer look.

Choosing the right architectural pattern

A single data engineering problem can be solved in many ways. However, with the ever-evolving expectations of customers and the emergence of new technologies, choosing the correct architectural pattern has become more challenging. What is more interesting is that, with the changing technological landscape, the need for agility and extensibility in architecture has increased manyfold, both to avoid unnecessary costs and to keep the architecture sustainable over time.

Choosing the best-fit technology stack

One of the complex problems that a data architect needs to figure out is the technology stack. Even when you have created a very well-architected solution, whether it flies or flops depends on the technology stack you choose and how you plan to use it. As more and more tools, technologies, databases, and frameworks are developed, it remains a big challenge for data architects to choose an optimal tech stack that can help create a scalable, reliable, and robust solution. Often, a data architect also needs to take non-technical factors into account, such as a tool's predicted future growth, the market availability of skilled resources for it, vendor lock-in, cost, and community support options.

Lack of actionable data governance

Data governance is a buzzword in data businesses, but what does it mean? Governance is a broad area that includes both the workflows and the toolsets used to govern data. If either the tools or the workflow process has limitations or is missing, data governance is incomplete. When we talk about actionable governance, we mean the following elements:

  • Integrating data governance with all data engineering systems to maintain standard metadata, including traceability of events and logs for a standard timeline
  • Integrating data governance concerning all the security policies and standards
  • Role-based and user-based access management policies on all data elements and systems
  • Adherence to defined metrics that are tracked continually
  • Integrating data governance and the data architecture

Data governance should always be aligned with strategic and organizational goals.

Recommending and communicating effectively to leadership

Creating an optimal architecture with the correct set of tools is a challenging task, but it is never enough unless it is put into practice. One of the hats that a data architect often needs to wear is that of a sales executive who needs to sell their solution to business executives or upper leadership. These are not usually technical people, and they don't have a lot of time. Data architects, most of whom have strong technical backgrounds, face the daunting task of communicating and selling their ideas to these people. To convince them of the opportunity and the idea, a data architect needs to back them up with proper decision metrics and information that aligns the opportunity with the broader business goals of the organization.

So far, we have seen the role of a data architect and the common problems that they face. In the next section, we will provide an overview of how a data architect mitigates those challenges on a day-to-day basis.

Techniques to mitigate those challenges

In this section, we will discuss how a data architect can mitigate the aforementioned challenges. To understand the mitigation plan, it is important to understand what the life cycle of a data architecture looks like and how a data architect contributes to it. The following diagram shows the life cycle of a data architecture:

Figure 1.8 – Life cycle of a data architecture

The data architecture life cycle starts with defining the problem the business is facing, which is mainly identified or reported by business teams or customers. Then, the data architects work closely with the business to define the business requirements. However, in a data engineering landscape, that is not enough: in a lot of cases, there are hidden requirements or anomalies. To mitigate such problems, business analysts team up with data architects to analyze the data and the current state of the system, including any existing solution, the current cost or loss of revenue due to the problem, and the infrastructure where the data resides. This helps refine the business requirements. Once the business requirements are more or less frozen, the data architects map them to technical requirements.

Then, the data architect defines the standards and principles of the architecture and determines the priorities of the architecture based on the business need and budget. After that, the data architect creates the most suitable architectures, along with their proposed technology stacks. In this phase, the data architects work closely with the data engineers to implement proofs of concept (POCs) and evaluate the proposed solution in terms of feasibility, scalability, and performance.

Finally, the architects recommend solutions based on the evaluation results and architectural priorities defined earlier. The data architects present the proposed solutions to the business. Based on priorities such as cost, timeline, operational cost, and resource availability, feedback is received from the business and clients. It takes a few iterations to solidify and get an agreement on the architecture.

Once an agreement has been reached, the solution is implemented. Based on the implementation challenges and particular use cases, the architecture may be revised or tweaked a little. Once an architecture is implemented and goes to production, it enters the maintenance and operations phase. During maintenance and operations, feedback is sometimes provided, which might result in a few architectural improvements and changes, but these are seldom needed if the solution is well architected in the first place.

In the preceding diagram, a blue box indicates major involvement from a customer, a green box indicates major involvement from a data architect, a yellow box means the data architect shares involvement equally with another stakeholder, and a gray box means the data architect has the least involvement in that phase.

Now that we have understood the life cycle of the data architecture and a data architect’s role in various phases, we will focus on how to mitigate those challenges that are faced by a data architect. This book covers how to mitigate those challenges in the following way:

  • Understanding the business data, its characteristics, and storage options:
    • Data and its characteristics were covered earlier in this chapter; it will also be covered partly in Chapter 2, Data Storage and Databases
    • Storage options will also be discussed in Chapter 2, Data Storage and Databases
  • Analyzing and defining the business problem:
    • Understanding the various kinds of data engineering problems (covered in this chapter)
    • We have provided a step-by-step analysis of how an architect should analyze a business problem, classify, and define it in Chapter 4, ETL Data Load – A Batch-Based Solution to Ingest Data in a Data Warehouse, Chapter 5, Architecting a Batch Processing Pipeline, and Chapter 6, Architecting a Real-Time Processing Pipeline
  • The challenge of choosing the right architecture. To choose the right architectural pattern, we should be aware of the following:
    • The types of data engineering problems and the dimensions of data (we discussed this in this chapter)
    • The different types of data and various data storage available (Chapter 2, Data Storage and Databases)
    • How to model and design different kinds of data while storing it in a database (Chapter 2, Data Storage and Databases)
    • Understanding various architectural patterns for data processing problems (Chapter 7, Core Architectural Design Patterns)
    • Understanding the architectural patterns of publishing the data (Section 3, Enabling Data as a Service)
  • The challenge of choosing the best-fit technology stack and data platform. To choose the correct set of tools, we need to know how each tool works and when to use which:
    • How to choose the correct database will be discussed in Chapter 2, Data Storage and Databases
    • How to choose the correct platform will be discussed in Chapter 3, Identifying the Right Data Platform
    • A step-by-step hands-on guide to using different tools in batch processing will be covered in Chapter 4, ETL Data Load – A Batch-Based Solution to Ingest Data in a Data Warehouse, and Chapter 5, Architecting a Batch Processing Pipeline
    • A step-by-step guide to architecting real-time stream processing and choosing the correct tools will be covered in Chapter 6, Architecting a Real-Time Processing Pipeline
    • The different tools and technologies used in data publishing will be discussed in Chapter 9, Exposing MongoDB Data as a Service, and Chapter 10, Federated and Scalable DaaS with GraphQL
  • The challenge of creating a design for scalability and performance will be covered in Chapter 11, Measuring Performance and Benchmarking Your Applications. Here, we will discuss the following:
    • Performance engineering basics
    • The publishing performance benchmark
    • Performance optimization and tuning
  • The challenge of a lack of data governance. Various data governance and security principles and tools will be discussed in Chapter 8, Enabling Data Security and Governance.
  • The challenge of evaluating architectural solutions and recommending them to leadership. In the final chapter of this book (Chapter 12, Evaluating, Recommending, and Presenting Your Solution), we will use the various concepts that we have learned throughout this book to create actionable data metrics and determine the most optimized solution. Finally, we will discuss techniques that an architect can apply to effectively communicate with business stakeholders, executive leadership, and investors.

In this section, we discussed how this book can help an architect overcome the various challenges they will face and make them more effective in their role. Now, let’s summarize this chapter.

Summary

In this chapter, we learned what data engineering is and looked at a few practical examples of data engineering. Then, we covered the basics of data engineering, including the dimensions of data and the kinds of problems that are solved by data engineers. We also provided a high-level overview of various kinds of processing problems and publishing problems in a data engineering landscape. Then, we discussed the roles and responsibilities of a data architect and the kind of challenges they face. We also briefly covered the way this book will guide you to overcome challenges and dilemmas faced by a data architect and help you become a better Java data architect.

Now that you understand the basic landscape of data engineering and what this book will focus on, in the next chapter, we will walk through various data formats, data storage options, and databases and learn how to choose one for the problem at hand.


Key benefits

  • Learn how to adapt to the ever-evolving data architecture technology landscape
  • Understand how to choose the best suited technology, platform, and architecture to realize effective business value
  • Implement effective data security and governance principles

Description

Java architectural patterns and tools help architects build reliable, scalable, and secure data engineering solutions that collect, manipulate, and publish data. This book will help you make the most of the available data architecting solutions, with clear and actionable advice from an expert. You'll start with an overview of data architecture, exploring the responsibilities of a Java data architect, and learning about various data formats, data storage, databases, and data application platforms, as well as how to choose them. Next, you'll understand how to architect a batch and real-time data processing pipeline. You'll also get to grips with the various Java data processing patterns, before progressing to data security and governance. The later chapters will show you how to publish Data as a Service and how to architect it. Finally, you'll focus on how to evaluate and recommend an architecture by developing performance benchmarks, estimations, and various decision metrics. By the end of this book, you'll be able to successfully orchestrate data architecture solutions using Java and related technologies, as well as evaluate and present the most suitable solution to your clients.

Who is this book for?

Data architects, aspiring data architects, Java developers, and anyone who wants to develop or optimize scalable data architecture solutions using Java will find this book useful. A basic understanding of data architecture and Java programming is required to get the best from this book.

What you will learn

  • Analyze and use the best data architecture patterns for problems
  • Understand when and how to choose Java tools for a data architecture
  • Build batch and real-time data engineering solutions using Java
  • Discover how to apply security and governance to a solution
  • Measure performance, publish benchmarks, and optimize solutions
  • Evaluate, choose, and present the best architectural alternatives
  • Understand how to publish Data as a Service using GraphQL and a REST API

Product Details

Publication date: Sep 30, 2022
Length: 382 pages
Edition: 1st
Language: English
ISBN-13: 9781801073080




Table of Contents

18 Chapters

Section 1 – Foundation of Data Systems
Chapter 1: Basics of Modern Data Architecture
Chapter 2: Data Storage and Databases
Chapter 3: Identifying the Right Data Platform
Section 2 – Building Data Processing Pipelines
Chapter 4: ETL Data Load – A Batch-Based Solution to Ingesting Data in a Data Warehouse
Chapter 5: Architecting a Batch Processing Pipeline
Chapter 6: Architecting a Real-Time Processing Pipeline
Chapter 7: Core Architectural Design Patterns
Chapter 8: Enabling Data Security and Governance
Section 3 – Enabling Data as a Service
Chapter 9: Exposing MongoDB Data as a Service
Chapter 10: Federated and Scalable DaaS with GraphQL
Section 4 – Choosing Suitable Data Architecture
Chapter 11: Measuring Performance and Benchmarking Your Applications
Chapter 12: Evaluating, Recommending, and Presenting Your Solutions
Index
Other Books You May Enjoy

Customer reviews

Rating: 4.7 out of 5 (6 ratings)
5 star: 66.7%, 4 star: 33.3%, 3 star: 0%, 2 star: 0%, 1 star: 0%

JJ – Jan 25, 2023 – 5 stars
I found that the author has a clear flow through the data engineering concepts, step-by-step adaptation, and the DaaS layer. It covers the details we will need during implementation clearly, instead of a bird's-eye view like most books. I would recommend it to my colleagues.

Amazon Customer – Nov 29, 2022 – 5 stars
I thoroughly enjoyed reading this book. This is one of the best technical books on how to architect the right batch or real-time data pipelines. It helps us understand the core design patterns and DaaS implementations. Having had the opportunity to closely collaborate with Sinchan (the author) on multiple projects in the past few years, I am confident that the strategies published here on data scalability, optimization, security, and performance will assist us in all our existing and upcoming implementations and deliverables. This is an essential book for all aspiring data architects.

Geraldo Netto – Jan 25, 2023 – 5 stars
First things first, it goes from the ground up, covering the big picture of current data architecture/engineering and its different roles. Then, it moves to the most fundamental concepts of data and databases, like data types and formats, a bit of storage, the classical data warehouse, and a bit of relational and NoSQL, including modeling. Then, we have a bit of cloud/Hadoop. At this point, we reach the data pipeline, going from classical ETL to batch and stream processing, including patterns for it - plain beauty, really! Data security and governance - certainly one of the areas that people most neglect, unfortunately. The third part is about data as a service, discussing MongoDB, Spring Boot, and GraphQL with use cases. And by the very end of the book, we still have guidelines for performance tuning and how to present/'sell' your solution. So, the book simply covers the process end-to-end.

Some highlights:
  • it covers the whole data workflow
  • it's use case based
  • the software/tools and patterns are widely deployed, meaning a data architect's job will actually touch one (if not all) of the software mentioned throughout the book
  • as a consequence of the above, it's easier to get help with popular software/tools than with 'esoteric' or very new tools

On the other side (maybe for a second edition):
  • maybe discuss a bit more about parsing data, especially tricky cases like multi-line logs
  • on performance tuning, it would be interesting to have more details on other GC algorithms like ZGC, which became production-ready in JDK 15 and promises fairly decent sub-millisecond pauses for really large heaps
  • while the discussion of the lambda pattern is brilliant, it would be interesting to discuss the CAP theorem and the PACELC theorem (an extension of CAP) a bit more

Overall, a great introduction to data architecture in Java.

Steven Fernandes – Jan 06, 2023 – 5 stars
Section 1 of the book starts with a short introduction to data engineering, the basic concepts of data engineering, and the role a Java data architect plays in data engineering. It then briefly discusses various data types, storage formats, data formats, and databases, and when to use them. It also explains how to identify the right data platform, provides an overview of various platforms to deploy data pipelines on, and shows how to choose the correct platform.
Section 2 of the book gives an overview of ETL data load. It discusses how to approach, analyze, and architect an effective solution for a batch-based data ingestion problem using Spring Batch and Java. Next, it focuses on architecting a batch processing pipeline and discusses how to architect and implement a data analysis pipeline in AWS using S3, Apache Spark (Java), AWS Elastic MapReduce (EMR), and AWS Athena for a big data use case. The book also provides a step-by-step guide to building a real-time streaming solution to predict the risk category of a loan application using Java, Kafka, and related technologies. It discusses common architectural patterns used to solve data engineering problems and when to use them. Section 2 ends with data governance and outlines how to apply it using a practical use case. It also briefly touches upon the topic of data security.
Section 3 provides a step-by-step guide on building Data as a Service to expose MongoDB data using a REST API. It also discusses GraphQL, various GraphQL patterns, and how to publish data using GraphQL. The final section, Section 4, provides an overview of performance engineering, how to measure performance and create benchmarks, and how to optimize performance. It discusses evaluating and choosing the best-suited alternative among various architectures and how to present the recommended architecture effectively.

POE – Feb 09, 2023 – 4 stars
This is a well-written Java book that focuses on a disparate set of topics. These topics include design patterns, data architecture (storage, databases, etc.), data processing pipelines, data security, benchmarks, Data as a Service, and more. The book is written for seasoned Java developers. Technologies featured in the book include various AWS products, GraphQL, Kafka, Hadoop, and REST APIs. This book is a good addition to your Java library if you want to strengthen your data architecture knowledge with Java.