
Developing Modern Database Applications with PostgreSQL: Use the highly available and object-relational PostgreSQL to build scalable and reliable apps


Developing Modern Database Applications with PostgreSQL

Introduction to PostgreSQL Development and Administration

PostgreSQL is an object-relational database management system (ORDBMS) based on the INGRES (INteractive Graphics REtrieval System) package, which was developed at the University of California, Berkeley. The POSTGRES (Post Ingres) project started in 1985, and version 1 was released to a small number of external users in June of 1989. Now, with more than 20 years of development, PostgreSQL has become the most advanced open source database, available all over the world.

This chapter introduces the development of PostgreSQL and how PostgreSQL has become a popular Database as a Service (DBaaS) offering among the current clouds. We will present an overview of the numerous features of PostgreSQL in various development environments, such as NodeJS and Django. Moving forward, we will introduce you to the PostGIS extension, which turns PostgreSQL into a geospatial database. We will also present the PostgREST standalone web server, which aims to do one thing well: add a RESTful HTTP interface to any PostgreSQL database.

Then, in the second part of the chapter, we will learn about the administration of PostgreSQL. We will apply DevOps practices to set up PostgreSQL high availability (HA) clusters. We will also set up New Relic to monitor a PostgreSQL database, carry out performance tests on a PostgreSQL database with PGBench and JMeter, and use PostgreSQL testing frameworks.

In this chapter, we will cover the following main topics:

  • An overview of PostgreSQL development
  • An overview of PostgreSQL administration

An overview of PostgreSQL development

In 1994, Postgres95 was released to the world by Andrew Yu and Jolly Chen as an open source descendant of the original POSTGRES Berkeley code; they added a SQL language interpreter to POSTGRES. By 1996, the name was changed to PostgreSQL, and version 6.0 was released, combining the original POSTGRES code with SQL capabilities. More recently, DB-Engines and the SD Times 2018 100 featured PostgreSQL as the "DBMS of the Year 2017."

In October 2019, the PostgreSQL Global Development Group released PostgreSQL 12.0. PostgreSQL remains the most advanced open source database in the world, and version 12 provides many important improvements, including the following:

  • Improved performance of standard B-tree indexes and a reduction in B-tree index size.
  • The ability to rebuild indexes concurrently, so that a REINDEX operation no longer blocks writes to the index; the new REINDEX CONCURRENTLY statement builds on the parallel index builds introduced in earlier releases (a short sketch follows this list).
  • Improved partitioning performance, so that developers can query or alter tables with thousands of partitions without blocking, and foreign keys can now reference partitioned tables.
  • Most-common-value statistics for the CREATE STATISTICS command, leading to improved query plans for columns whose values are unevenly distributed.
  • Common table expressions (WITH queries) can now be inlined into the calling query, which often makes them faster.
  • Support for the INCLUDE clause (covering indexes) on generalized search tree (GiST) indexes; GiST is an extensible index structure that allows you to build indexes over many kinds of data.
  • Reduced write-ahead log (WAL) overhead when building a GiST, GIN, or SP-GiST index.
  • The ability to enable or disable page checksums on an offline cluster via the pg_checksums utility (formerly pg_verify_checksums), without dumping and reloading data (note that online checksum enablement is still in progress and not yet available in PostgreSQL 12).
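
The following is a minimal sketch of how two of these version 12 features can be exercised from Python with the psycopg2 driver. The database name bankdb, the table transactions, its columns branch_id and atm_id, and the index name are all hypothetical, used only for illustration.

    # Minimal sketch: exercising REINDEX CONCURRENTLY and MCV statistics from Python.
    # The database, table, column, and index names below are hypothetical.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    conn.autocommit = True  # REINDEX CONCURRENTLY cannot run inside a transaction block
    cur = conn.cursor()

    # Rebuild an index without blocking concurrent writes to the table (PostgreSQL 12+)
    cur.execute("REINDEX INDEX CONCURRENTLY transactions_branch_id_idx")

    # Collect most-common-value statistics across two correlated columns,
    # then refresh the planner statistics
    cur.execute("CREATE STATISTICS transactions_mcv (mcv) ON branch_id, atm_id FROM transactions")
    cur.execute("ANALYZE transactions")

    cur.close()
    conn.close()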

PostgreSQL 12 also improves the performance of the INSERT and COPY statements for partitioned tables and allows a new partition to be attached to a table without blocking queries. You can read more about the fixes to previous releases at https://www.postgresql.org/docs/12/release-12.html.

Finally, PostgreSQL 12 benefits all users with notable improvements to query performance over larger datasets and to space utilization; it has quickly received good reviews across the database industry and has become a preferred open source database for many developers.

PostgreSQL 12 is also widely deployed in the cloud, as a so-called cloud database or DBaaS.

What is DBaaS?

DBaaS, which is sometimes referred to as a cloud database, provides many kinds of databases as a managed service. DBaaS works in the same way as Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). IaaS provides infrastructure components and PaaS provides development platforms as managed services in the cloud. In fact, the offerings of IaaS and PaaS often include databases.

When customers demand a DBaaS offering in the cloud, they only pay for what they use on a monthly or annual basis. They do not have to pay for what they do not use. The cloud providers are responsible for managing database services such as maintenance, upgrades, or administration for their customers. At present, the DBaaS offerings include both relational database management systems (RDBMS) and NoSQL databases.

The primary advantage of the cloud service model is well understood by every developer: customers do not have to install or maintain the software in their own data center. However, there are also some disadvantages, such as a lack of control over network performance and limited access to low-level database maintenance tasks such as data compression.

PostgreSQL can be delivered as DBaaS on many clouds, such as Amazon Web Services (AWS), Google Cloud SQL, Microsoft Azure, Heroku, and EnterpriseDB Cloud.

The development of PostgreSQL by various environments

Most applications, at some point, need to persist data. This can be through files, local storage, cloud services, or often databases. Relational database systems are usually a good default choice, particularly PostgreSQL, which is a very powerful open source SQL server. 

Some companies have the resources to organize their own dedicated database team. If you are lucky enough to work in such a company, that team will probably craft all of the stored procedures inside the databases, so you will only have to use the language of your choice, such as NodeJS or Python, to call those stored procedures. In practice, this does not occur very often, and the truth is that many developers, even so-called full stack developers, know very little about SQL and will seek out various abstractions in their favorite package repository.

Features of PostgreSQL with NodeJS

NodeJS is a modern, cutting-edge web server technology that has attracted vast attention, not only from start-ups but also from giant enterprises. NodeJS is considered an exceptional framework for the IT market because of the following factors:

  • NodeJS is an open source, cross-platform runtime built on Chrome's V8 JavaScript engine.
  • NodeJS is ideal for data-intensive, real-time applications, and it enables event-driven programming.
  • NodeJS is also very fast because it works on a non-blocking I/O model.
  • Outperforming conventional web servers, NodeJS has been a viable option for Yahoo, eBay, and Walmart.
  • NodeJS employs push technology.
  • It is also a lightweight web server in terms of memory usage.
  • NodeJS can be a good solution for data-dense, real-time web applications across multiple devices.

The following features of NodeJS are usually emphasized by developers:

  • The super-fast web server: NodeJS operates at a very high speed by using a single-threaded event loop model to process all of its asynchronous I/O operations. Therefore, any major actions can be performed quickly with NodeJS, such as network connections, filesystems, and reading/writing to databases. NodeJS supports developers by allowing them to create quick and robust network applications and offers parallel connections to increase throughput.
  • The JavaScript web server: The NodeJS suite is a JavaScript runtime environment, and developers can now write JavaScript not only in the browser but also on the server. When the browser code and the server code are created in a similar manner, it is convenient to transport data between the server and the client. NodeJS fills the gap between the frontend and backend skills, and full stack developers can use JavaScript for both. The fact that all NodeJS programs are made using JavaScript increases the effectiveness of the web development process.
  • Real-time data streaming: NodeJS considers both HTTP requests and responses as data streams. Hence, when data comes in the form of streams, the overall processing time can be reduced because NodeJS can process files and upload files at the same time. Developers can choose NodeJS for real-time video or audio recording.
  • Real-time web applications: Because NodeJS is very fast, it is obviously a winner for games and chat apps as well as any other multi-user real-time web apps. The synchronization process is fast and orderly due to the event-driven architecture of NodeJS, and the event loop of NodeJS through the web socket protocol handles the multi-user function.
  • Node Package Manager (NPM): NodeJS is an open source suite with more than 60,000 modules available in NPM.
  • A good solution for synchronization: NodeJS is very efficient in its ability to solve common network development problems because it manages asynchronous I/O very well with its single-threaded event loop. In order to handle many clients, all I/O tasks are undertaken together in NodeJS.

Features of PostgreSQL with Python

Many Python developers prefer to use PostgreSQL for their applications. As a result, Python sample code for PostgreSQL is well documented, with plenty of typical usage scenarios. Both PostgreSQL and Python are popular because of the following:

  • PostgreSQL's open source license is free, so developers can operate as many databases as they wish without licensing costs.
  • Developers with PostgreSQL experience are easier to find than for many other relational databases.
  • Django is a Python Model-View-Template (MVT) web framework, which defines itself as a "batteries included" web framework.
  • Django is simple and robust, and it is one of the most famous Python web frameworks; it is used by sites such as Instagram and NASA, while companies such as Google and YouTube rely heavily on Python.

The following features are emphasized by developers of Python (Django):

  • PostgreSQL's connection with Python: Python developers connect to PostgreSQL databases via the psycopg2 database driver and then use an object-relational mapper (ORM) to turn PostgreSQL tables into objects; finally, these objects can be used in their Python applications (a minimal configuration sketch follows this list).
  • A "batteries included" philosophy: The philosophy of "batteries included" means that Django provides plenty of functionality, including a powerful ORM, multisite and multi-language support, MVT layouts, RSS and Atom feeds, AJAX support, free APIs, URL routing, easy database migrations, session handling, HTTP and templating libraries, sensible code layouts, and a default admin section.
  • Good tutorials: Django has very good documentation that is kept up to date with the latest information, such as technical requirements and quick-start details, detailed release notes, backward-incompatible changes, and online topics on Python development.
  • The Django admin interface: The admin interface is one of the key advantages of Django. From a few simple lines of Python code, developers get a fully featured admin interface.
  • Built-in templates: One of the advantages of Django is its robust built-in template system, which facilitates the process of application development.
  • Good scalability: Django can handle heavy traffic and the mobile app usage of many users; it maximizes scalability while minimizing web hosting costs. Django can also run well across a large number of hosts while keeping hosting relatively cheap or even free.
  • Web application security: Django has protection against SQL injection, XSS and CSRF attacks, and clickjacking, and it avoids exposing the source code of your website.
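
As a rough illustration of the psycopg2-plus-ORM workflow described above, the following fragments sketch how a hypothetical Django project would point its ORM at PostgreSQL and map a table to a Python class. The database name bankdb, the credentials, and the AtmLocation model are assumptions for illustration only.

    # settings.py -- Django's ORM talks to PostgreSQL through psycopg2 under the hood
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "bankdb",
            "USER": "postgres",
            "PASSWORD": "secret",
            "HOST": "localhost",
            "PORT": "5432",
        }
    }

    # models.py -- a table mapped to a Python class
    from django.db import models

    class AtmLocation(models.Model):
        name = models.CharField(max_length=100)
        city = models.CharField(max_length=100)

    # Elsewhere in the project, the ORM turns object operations into SQL, for example:
    #   AtmLocation.objects.filter(city="New York")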

PostGIS spatial extension

PostGIS is a spatial extender for PostgreSQL. It turns PostgreSQL into a spatial database compliant with the OpenGIS Simple Features for SQL specification, allowing you to run location-aware SQL queries over geographic objects while remaining free and open source. Because spatial data is usually related to many other types of data, PostGIS allows PostgreSQL developers to encode more complex spatial relationships. The first version (0.1) of PostGIS was released in May 2001 by Refractions Research. The latest released version of PostGIS is PostGIS 3.1.2.

When using PostGIS, we can work with spatial data just like anything else in SQL statements. With the power of SQL, developers conveniently get database transactions, backups, integrity checks, reduced data redundancy, multi-user operation, and security controls for their spatial data.

The following is a list of advantages of PostGIS:

  • It offers sophisticated spatial types, spatial operators, and spatial functions.
  • It significantly shortens the development time of applications.
  • It allows spatial SQL querying using simple expressions for spatial relationships, such as distance, adjacency, and containment, and for spatial operations, such as area, length, intersection, union, and buffer (as sketched after this list).
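
As a small illustration of such spatial SQL, the following sketch finds every ATM within 500 meters of a point. It assumes the PostGIS extension is installed and that a hypothetical table atm_locations has a geometry column geom stored in SRID 4326 (longitude/latitude).

    # Minimal sketch: a PostGIS proximity query from Python (table and column
    # names are hypothetical).
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT name, ST_Distance(geom::geography, ref.pt) AS meters
        FROM atm_locations,
             (SELECT ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography AS pt) AS ref
        WHERE ST_DWithin(geom::geography, ref.pt, 500)
        ORDER BY meters
    """, (-73.9857, 40.7484))
    for name, meters in cur.fetchall():
        print(name, round(meters, 1))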

In 2006, the Open Geospatial Consortium assessed that "PostGIS implements the specified standard for simple features for SQL." In fact, PostGIS can be used as a backend for many software systems, such as OpenStreetMap, ArcGIS, OpenJUMP, MapGuide, Kosmo, and QGIS. Furthermore, PostGIS also has an open source extension named "pgRouting" that provides geospatial routing functionality with many algorithms, including the all-pairs algorithm, Johnson's algorithm, the Floyd-Warshall algorithm, A*, the bidirectional Dijkstra algorithm, the traveling salesperson algorithm, and more.

The PostgREST RESTful API for PostgreSQL databases

PostgREST is a web server created by Joe Nelson that effectively turns any PostgreSQL database into an API. This tool manages client authentication through JSON Web Tokens (JWTs). It also securely embeds a database role name in the JWT and uses that database role for all connections. The latest PostgREST release is version 7.0.1, and one of its earliest versions was PostgREST 0.2.6 from February 18, 2015. 

PostgREST is built on three core concepts: 

  • JSON encoding
  • Authentication by JWTs 
  • Resource embedding

Taking advantage of the very fast PostgreSQL JSON encoder for large responses, PostgREST is also a very fast RESTful API. When PostgREST first connects to a database, it connects using an authenticator role that only has login capability. Then, whenever an authenticated request containing a JWT is sent to PostgREST, the token is decoded using the secret key to determine the database role for that request. Using the &select= parameter, PostgREST can also query related tables for us through the foreign keys that have been declared inside the PostgreSQL database, as the following sketch shows.
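
The following sketch shows what such requests look like from Python with the requests library. The PostgREST endpoint address, the atm_locations and transactions tables, and the pre-issued JWT are all assumptions for illustration.

    # Minimal sketch: calling a PostgREST instance assumed to run at
    # http://localhost:3000 and to expose hypothetical tables.
    import requests

    TOKEN = "<your JWT here>"
    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Plain filtered read: /atm_locations?city=eq.New York
    resp = requests.get("http://localhost:3000/atm_locations",
                        params={"city": "eq.New York"}, headers=headers)
    print(resp.json())

    # Resource embedding: each ATM together with its related transactions,
    # resolved through the foreign key declared in the database
    resp = requests.get("http://localhost:3000/atm_locations",
                        params={"select": "*,transactions(*)"}, headers=headers)
    print(resp.json())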

PostgREST's philosophy is that it aims to do one thing well: add an HTTP interface to any PostgreSQL database. 

An overview of PostgreSQL administration

The daily tasks of any PostgreSQL database administrator can include tuning the parameter values that govern the PostgreSQL architecture, setting up and maintaining PostgreSQL HA clusters, and creating dashboards to monitor their PostgreSQL database clusters. They may also be asked to run performance tests several times per month if their PostgreSQL databases frequently encounter heavy traffic loads.

The PostgreSQL architecture

The PostgreSQL architecture is very simple. It consists of shared memory, a few background processes, and data files, as shown in the following diagram:

Figure 1.1 – The PostgreSQL architecture

For a better understanding, let's discuss the preceding architecture in the following sections. 

Shared memory

Shared memory refers to the memory reserved for database caching and transaction log caching. The most important elements in shared memory are the shared buffers and the WAL buffers (a quick inspection sketch follows this list):

  • Shared buffers: The purpose of the shared buffers is to minimize disk I/O. For this purpose, the following principles must be met:
    • You need to access very large buffers (that is, tens or hundreds of gigabytes worth) quickly.
    • You should minimize contention when many users access it at the same time.
    • Frequently used blocks must remain in the buffer for as long as possible.
  • WAL buffers: The WAL buffer is a buffer that temporarily stores changes to the database:
    • The contents stored in the WAL buffer are written to the WAL file at a predetermined point in time.
    • From a backup and recovery point of view, WAL buffers and WAL files are very important.
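
A quick way to inspect these two settings is to query the pg_settings catalog view, as in the following sketch (the database name bankdb is hypothetical).

    # Minimal sketch: read the current shared memory settings.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT name, setting, unit
        FROM pg_settings
        WHERE name IN ('shared_buffers', 'wal_buffers')
    """)
    for name, setting, unit in cur.fetchall():
        print(f"{name} = {setting} (unit: {unit})")   # e.g. shared_buffers = 16384 (unit: 8kB)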

PostgreSQL autovacuum

PostgreSQL includes the VACUUM statement, which reclaims the storage occupied by dead tuples inside the database. When we delete or update tuples, the old versions are not physically removed from their table until a vacuum is performed, so frequently updated tables need to be vacuumed periodically.

The PostgreSQL autovacuum launcher is an optional daemon. By default, it is enabled inside the PostgreSQL configuration using the autovacuum = on parameter. Therefore, unless you turn it off, the autovacuum launcher will wake up every autovacuum_naptime seconds. 

Whenever autovacuum runs (by default, every minute), it invokes multiple worker processes. The number of concurrent vacuum workers is defined by the autovacuum_max_workers parameter in the PostgreSQL configuration file. Vacuum work does not take exclusive locks on tables, so it does not block other database tasks.
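
The following sketch shows one way to watch vacuum-related activity and to trigger a manual vacuum from Python; the database bankdb and the table transactions are hypothetical.

    # Minimal sketch: inspect dead tuples and run a manual VACUUM.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    conn.autocommit = True   # VACUUM cannot run inside a transaction block
    cur = conn.cursor()

    # Tables with the most dead tuples, plus the time of their last autovacuum
    cur.execute("""
        SELECT relname, n_dead_tup, last_autovacuum
        FROM pg_stat_user_tables
        ORDER BY n_dead_tup DESC
        LIMIT 10
    """)
    for relname, dead, last_run in cur.fetchall():
        print(relname, dead, last_run)

    # Manually vacuum (and re-analyze) one bloated table
    cur.execute("VACUUM (VERBOSE, ANALYZE) transactions")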

The PostgreSQL writer and checkpointer

Inside the shared buffer, there are pages that have been modified, but their changes have not been written to disk yet; those pages are called dirty pages. A checkpoint is a specific point in time when all of the dirty pages in the shared buffer are written to disk by the checkpointer. So, if there is a crash, the database recovery will start from the most recent checkpoint where all the data was surely consistent. The PostgreSQL checkpointer flushes all of the dirty pages at certain intervals to create a checkpoint for necessary recovery.

There are two types of writers in PostgreSQL: the WAL writer and the background writer. Between the two checkpoints, the background writer flushes some dirty pages to disk so that there are always enough clean pages to reduce the amount of blocking that occurs during the queries. 

PostgreSQL does not write updates directly to the data files on disk. On commit, the updates are written sequentially to the WAL file; later, the checkpointer writes all of the dirty blocks to their respective data files. Because data file writes cannot be sequential, deferring them in this way means users do not have to wait while the correct block is located inside a data file.

Hence, the WAL writer will write our updates from the shared buffer to the WAL file.
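
In PostgreSQL 12, the pg_stat_bgwriter view summarizes how much work the checkpointer, the background writer, and the backends themselves have done; the following sketch simply reads it (the database name is hypothetical).

    # Minimal sketch: checkpoint and background writer activity.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT checkpoints_timed, checkpoints_req,
               buffers_checkpoint, buffers_clean, buffers_backend
        FROM pg_stat_bgwriter
    """)
    timed, requested, by_checkpointer, by_bgwriter, by_backends = cur.fetchone()
    print("checkpoints (timed / requested):", timed, requested)
    print("buffers written by checkpointer / bgwriter / backends:",
          by_checkpointer, by_bgwriter, by_backends)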

PostgreSQL process types

PostgreSQL has four types of processes: 

  • The postmaster process: The postmaster process is the first process to begin when you start PostgreSQL. It performs recovery, initializes shared memory, and runs background processes. It also creates a backend process when there is a connection request from the client process.
  • The background process: The list of background processes required for PostgreSQL operations can be found in the following table:
Process                Role
Logger                 Writes error messages to the log file.
Checkpointer           When a checkpoint occurs, writes the dirty data in the shared buffer to the data files.
Writer                 Periodically writes dirty buffers to a file.
WAL writer             Writes the WAL buffer to the WAL file.
Autovacuum launcher    Launches worker processes and carries out vacuum operations on bloated tables.
Archiver               When in archive mode, copies the completed WAL files to the specified directory.
Stats collector        Collects DBMS usage statistics, such as session execution information (pg_stat_activity) and table usage statistics (pg_stat_all_tables).

Table 1.1 – The background processes of PostgreSQL

  • The backend process: The backend process performs the query request of the user process and then transmits the result. Some memory structures are required for query execution; these are called local memory.
  • The client process: The client process is the user program that connects to the database. For every client connection, the postmaster forks a dedicated backend (child) process to serve it, as the sketch after this list shows.
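
One simple way to observe these processes is to query pg_stat_activity, which has listed the type of each server process since PostgreSQL 10. The sketch below groups the processes by type; the database name is hypothetical.

    # Minimal sketch: list the running server processes, grouped by type.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT backend_type, count(*)
        FROM pg_stat_activity
        GROUP BY backend_type
        ORDER BY backend_type
    """)
    for backend_type, count in cur.fetchall():
        print(backend_type, count)   # e.g. "checkpointer 1", "client backend 3"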

Managing HA in PostgreSQL

Managing HA in PostgreSQL is very important to ensure that database clusters maintain exceptional uptime and strong operational performance so that your data is always available to the application.

Master-slave might be the most basic and easiest HA architecture for developers. It is based on one master database with one or more standby servers. These standby databases will remain synchronized (or almost synchronized) with the master, depending on whether the replication is synchronous or asynchronous.

It is important to understand that PostgreSQL does not have a native mechanism to control failovers, that is, to automatically promote a standby server to become the new master when the master fails, with downtime as close to 0 as possible. To perform this procedure, there are third-party tools such as repmgr, pgpool-II, or patroni, to name a few.

These tools are placed in a layer above PostgreSQL, and they control the health status of the master; when a problem occurs, these tools fire a series of actions to promote the standby server as the new master.

There are several ways to classify a standby database:

  • By the nature of the replication:
    - Physical standbys: Disk blocks are copied.
    - Logical standbys: The streaming of the data changes.
  • By the synchronicity of the transactions:
    - Asynchronous: There is a possibility of data loss.
    - Synchronous: There is no possibility of data loss; the commits in the master wait for the response of the standby.
  • By usage:
    - Hot standbys: These support read-only connections. A hot standby is configured for synchronous-commit mode, so its master server must wait for the hot standby to confirm that it has consolidated the transaction log; when a commit statement is performed on the master, the commit only completes after all of the hot standbys have finished consolidating the transaction log.
    - Async standbys: These servers are configured for asynchronous-commit mode, so the master server does not wait for the async standbys to consolidate the transaction log; when a commit statement is performed on the master, the commit does not wait for these async standbys.

PostgreSQL uses a stream of WAL records to synchronize the standby databases. They can be synchronous or asynchronous, and the entire database server is replicated.
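
From the master side, the state of each attached standby can be checked in the pg_stat_replication view, and synchronous behavior is controlled by the synchronous_standby_names parameter. The following is a minimal sketch, assuming a primary database named bankdb with at least one standby attached.

    # Minimal sketch: inspect replication from the primary.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT application_name, state, sync_state, replay_lsn
        FROM pg_stat_replication
    """)
    for app, state, sync_state, replay_lsn in cur.fetchall():
        print(app, state, sync_state, replay_lsn)   # sync_state is 'sync' or 'async'

    # Making a standby synchronous is a configuration change on the primary, for example:
    #   ALTER SYSTEM SET synchronous_standby_names = 'standby1';
    #   SELECT pg_reload_conf();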

However, a master-slave setup is not enough to effectively ensure HA, as we also need to handle failures. To handle failures, we need to be able to detect them. Once we know there is a failure, for example, errors on the master or the master not responding, we can select a standby and fail over to it with the smallest possible delay. It is important that this process is as efficient as possible in order to restore full functionality so that the applications can start working again. PostgreSQL itself does not include an automatic failover mechanism, so it requires a custom script or third-party tools for this automation.

After a failover happens, the applications need to be notified accordingly so that they can start using the new master. Additionally, we need to evaluate the state of our architecture after a failover because we can run into a situation where we only have the new master running (for instance, we had a master and only one slave before the issue). In that case, we will need to somehow add a slave so as to recreate the master-slave setup we originally had for HA.

Benchmarking PostgreSQL performance

PostgreSQL is the most advanced open source database; hence, its performance is one of the first things you should evaluate.

Developers benchmark a PostgreSQL database so that they can check the capability and behavior of the database against their application. Based on the benchmarking plan, different hardware can yield different results. It is very important to separate the benchmarked database server from other servers, such as the servers generating the load or the servers collecting performance metrics. As a part of the benchmarking results, developers will obtain application characteristics, such as the following:

  • Is the application read/write-intensive?
  • What is the read/write split (for example, 60:40)?
  • How large is the dataset?
  • Are the data and structure representative of the actual production database?

Key performance factors in PostgreSQL

A production environment brings together different components: hardware such as CPU and memory, the operating system, and other software. PostgreSQL is installed to communicate with these components. Bear in mind that if the production environment is not properly configured, the overall performance will be degraded.

Some PostgreSQL queries can run faster or slower depending on the configuration that has been set. The goal of database performance benchmarking is to achieve the largest possible throughput. The key performance factors that affect a database are the workload, resources, optimization, and contention. The workload includes batch jobs, dynamic queries for online transactions, and data analytics queries that are used to generate reports. The workload will be different according to the period of each day, week, or month, and it also depends on the applications. The optimization of every database is unique: it can be a database-level configuration or query-level optimization. Contention refers to the conflict condition because two or more components of the workload could attempt to use a single resource at the same time. If contention increases, the throughput will decrease.

Using pgbench for PostgreSQL benchmarking

The pgbench application is a standard benchmarking tool shipped alongside the official distribution of PostgreSQL. Using pgbench, we repeatedly execute a given set of SQL commands and measure the following performance aspects of a PostgreSQL database (a small driver sketch follows this list):

  • Memory performance and disk performance
  • Read/write performance
  • Connection performance
  • Ad hoc queries' performance (that is, special queries for the needs of services or operational information)
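
The following sketch drives pgbench from Python. It assumes pgbench is on the PATH and that a database named bankdb already exists; the flags used are standard pgbench options (-i initialize, -s scale factor, -c clients, -j worker threads, -T duration in seconds).

    # Minimal sketch: initialize and run a pgbench test from Python.
    import subprocess

    # Create the pgbench tables at scale factor 50 (about 5 million rows in pgbench_accounts)
    subprocess.run(["pgbench", "-i", "-s", "50", "bankdb"], check=True)

    # Run a 60-second test with 10 concurrent clients and 2 worker threads
    result = subprocess.run(
        ["pgbench", "-c", "10", "-j", "2", "-T", "60", "bankdb"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)   # pgbench reports latency and transactions per second (tps)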

Monitoring PostgreSQL databases

There are several key metrics that you will definitely want to keep track of when it comes to database performance, and they are not all database-specific.

You should keep track of shared buffer usage while reading or updating data. When executing a request, PostgreSQL first checks the shared buffer cache; if the block is not found there, the data is read from disk and then cached in the shared buffer cache of the database.
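
A common way to quantify this is the cache hit ratio, computed from the pg_stat_database view; the following sketch is one possible implementation (the database name is hypothetical).

    # Minimal sketch: shared buffer cache hit ratio per database.
    import psycopg2

    conn = psycopg2.connect("dbname=bankdb user=postgres")
    cur = conn.cursor()
    cur.execute("""
        SELECT datname,
               round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_ratio
        FROM pg_stat_database
        WHERE datname IS NOT NULL
        ORDER BY hit_ratio
    """)
    for datname, hit_ratio in cur.fetchall():
        print(datname, hit_ratio)   # a healthy OLTP database usually sits well above 90%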

Implementing effective database monitoring offers benefits such as increased performance, higher application availability, faster database outage detection, and a more accurate analysis of storage requirements and index performance.

The DevOps environment for PostgreSQL

DevOps is a new and revolutionary way to deploy applications and services, including tools for automation, continuous development, integration, and deployment. Frequent delays can occur in Agile deployment sprints when databases, especially PostgreSQL databases, are added to the continuous integration and deployment stages. DevOps provides various tools for PostgreSQL database deployment, automation, and scalability.

Among DevOps tools, Vagrant and VirtualBox are two pieces of software that focus on automation tasks. They allow you to develop and manage virtual machines and environments in a single workflow, and they reduce the time it takes to set up a development environment. Additionally, the benefit of running PostgreSQL in a virtual machine is that you can standardize the operating system to provide a consistent environment and completely isolate unrelated databases from each other. Using Vagrant and VirtualBox, it only takes a couple of minutes to create a fresh PostgreSQL database server.

Vagrant also integrates with other DevOps configuration management tools such as Ansible and Terraform; these are two infrastructure tools that provide script templates to start PostgreSQL database clusters automatically in the cloud. Ansible and Terraform are good ways to automate PostgreSQL and speed up the development process.

The goal of database automation with DevOps is clear: your PostgreSQL servers will be configured identically, so they remain consistent through the quality assurance, staging, and production phases. Puppet is another well-known DevOps tool that we can use to manage PostgreSQL configuration files, the postgres user, and PostgreSQL backup crontabs. Jenkins, in turn, can be used to deploy and test virtually any software project, and it can help automate PostgreSQL test databases during software testing tasks.

PostgreSQL testing frameworks

PostgreSQL testing frameworks allow developers to run automated tests for existing stored procedures with unit tests and Test-Driven Development (TDD). Because all test cases are saved inside our PostgreSQL databases, they are not stored in files, version control utilities, or other external resources. One test case is a group of many database tests, and each test case is executed independently. After a test case has been executed, the changes that it made are automatically rolled back.

There are four famous test frameworks for PostgreSQL, as follows: 

  • pgTap: This is a unit test framework for PostgreSQL.
  • PGUnit: This is a simpler testing tool than pgTap.
  • Simple pgunit: This offers another approach for unit tests.
  • Testgres: This is a Python testing framework for PostgreSQL.

These testing frameworks allow users to verify the structure of their database schema and to exercise any views, procedures, functions, rules, or triggers. Furthermore, the testgres Python tool can also start a replica of a PostgreSQL database and even execute performance benchmarks against the database.
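
The rollback-per-test idea can also be illustrated without any of the frameworks named above. The following sketch uses plain psycopg2 with pytest; the database bankdb and the stored function get_atm_count(city text) are hypothetical.

    # Minimal sketch: a database unit test that rolls back whatever it changed.
    import psycopg2
    import pytest

    @pytest.fixture
    def db_cursor():
        conn = psycopg2.connect("dbname=bankdb user=postgres")
        cur = conn.cursor()
        yield cur
        conn.rollback()   # undo anything the test changed
        conn.close()

    def test_get_atm_count_returns_zero_for_unknown_city(db_cursor):
        db_cursor.execute("SELECT get_atm_count(%s)", ("Nowhere",))
        assert db_cursor.fetchone()[0] == 0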

Summary

In this chapter, we learned about PostgreSQL development. We discovered how PostgreSQL has become a popular DBaaS among the clouds and explored the features of PostgreSQL across various environments and tools, such as NodeJS, Django, PostGIS, and PostgREST. Next, we explored the administration of PostgreSQL by understanding the PostgreSQL architecture, PostgreSQL HA clusters, and PostgreSQL performance. Later, we looked at the principles of PostgreSQL database monitoring and introduced various DevOps tools for PostgreSQL. Finally, we developed an understanding of popular PostgreSQL database testing frameworks.

In the next chapter, we will learn how to set up a PostgreSQL RDS instance on a famous cloud, AWS, for ATMs. This RDS database will then be used in the remaining chapters of this book.


Key benefits

  • Build real-world enterprise database management systems using Postgres 12 features
  • Explore the development, administrative and security aspects of PostgreSQL 12
  • Implement best practices from industry experts to build powerful database applications

Description

PostgreSQL is an open-source object-relational database management system (DBMS) that provides enterprise-level services, including high performance and scalability. This book is a collection of unique projects providing you with a wealth of information relating to administering, monitoring, and testing PostgreSQL. The focus of each project is on both the development and the administrative aspects of PostgreSQL. Starting by exploring development aspects such as database design and its implementation, you’ll then cover PostgreSQL administration by understanding PostgreSQL architecture, PostgreSQL performance, and high-availability clusters. Various PostgreSQL projects are explained through current technologies such as DevOps and cloud platforms using programming languages like Python and Node.js. Later, you’ll get to grips with the well-known database API tool, PostgREST, before learning how to use popular PostgreSQL database testing frameworks. The book is also packed with essential tips and tricks and common patterns for working seamlessly in a production environment. All the chapters will be explained with the help of a real-world case study on a small banking application for managing ATM locations in a city. By the end of this DBMS book, you’ll be proficient in building reliable database solutions as per your organization's needs.

Who is this book for?

This PostgreSQL book is for database developers, database administrators, data architects, or anyone who wants to build end-to-end database projects using Postgres. This book will also appeal to software engineers, IT technicians, computer science researchers, and university students who are interested in database development and administration. Some familiarity with PostgreSQL and Linux is required to grasp the concepts covered in the book effectively.

What you will learn

  • Set up high availability PostgreSQL database clusters in the same containment, a cross-containment, and on the cloud
  • Monitor the performance of a PostgreSQL database
  • Create automated unit tests and implement test-driven development for a PostgreSQL database
  • Develop PostgreSQL apps on cloud platforms using DevOps with Python and Node.js
  • Write robust APIs for PostgreSQL databases using Python programming, Node.js, and PostgREST
  • Create a geospatial database using PostGIS and PostgreSQL
  • Implement automatic configuration by Ansible and Terraform for Postgres

Product Details

Publication date : Aug 13, 2021
Length: 440 pages
Edition : 1st
Language : English
ISBN-13 : 9781838648145
Vendor : PostgreSQL Global Development Group





Table of Contents

15 Chapters
Section 1 - Introducing PostgreSQL Development and Administration
Introduction to PostgreSQL Development and Administration
Section 2 - Development in PostgreSQL
Setting Up a PostgreSQL RDS for ATMs
Using PostgreSQL and Node.js for Banking Transactions
Managing Bank ATM Locations Using PostgreSQL and Django
Creating a Geospatial Database Using PostGIS and PostgreSQL
Managing Banking Transactions using PostgREST
Section 3 - Administration in PostgreSQL
PostgreSQL with DevOps for Continuous Delivery
PostgreSQL High Availability Clusters
High-Performance Team Dashboards Using PostgreSQL and New Relic
Testing the Performance of Our Banking App with PGBench and JMeter
Test Frameworks for PostgreSQL
Other Books You May Enjoy

Customer reviews

Rating distribution: 3.8 out of 5 (8 ratings)
5 star: 12.5%
4 star: 75%
3 star: 0%
2 star: 0%
1 star: 12.5%





filippos Sep 14, 2021
Rated 5 out of 5 stars
This book is trying to bridge the gap between postgres/devops/application engineers and developers, and it successfully does so, as you can actually build a whole application just by following the recipes in it. Initially you will create an RDS instance, and then it's time for the application side, where Node.js/django/postgis or even your own APIs will be explained in detail. At this point you need to make sure you can re-deploy your instance if needed and build a resilient infrastructure. Here, notions like automation (jenkins, ansible, puppet, terraform, etc.) are in first place. After this is completed, it's time for monitoring and dashboards. Finally, some performance testing to make sure your app performs as expected is required, and also some testing for QA purposes. I was amazed by how much content a book of 400 pages can include and by how, reading chapter after chapter, you are able to get all this content in the proper order, exactly as it would be used in a real-life scenario. It's obvious that the authors have a lot of experience in the field. Reading from a postgres engineer's view, I gained a lot of insight into how the app side of things works. I am sure that next time I have to debug some issue I will understand much better how each part of the system works, individually but also as a whole. If I could make a small suggestion, I would have liked a little more info/examples in the postgis chapter. However, it's a great book overall and it's highly recommended for anyone working directly or indirectly with postgres.
Amazon Verified review
Sai Aug 16, 2021
Rated 4 out of 5 stars
This is a well-written book; it certainly assumes the reader has a certain level of understanding of the topics. Most books tend to be written from an academic standpoint, but this one isn't, and that's what I loved about it. If you are a DBA, it is a must read! For developers, it gives a pretty good understanding of the use case and implementation principles.
Amazon Verified review
Anurag Oct 25, 2021
Rated 4 out of 5 stars
This book is for anyone who is looking to get their hands dirty with PostgreSQL and get up to speed quickly. In today's world, technology is changing so fast that it is sometimes intimidating for new starters in any given area. A lack of information and too much information are both counterproductive. The author did a great job by providing the right content needed to make any project work. What I liked most about this book is the approach the author used to explain different concepts by using mini projects on different platforms and technologies. Given the popularity of cloud databases, the author also covered AWS RDS databases in detail, with examples of available extensions. In conclusion, this book is helpful at each phase of the software development life cycle, and that is why it is useful to everyone irrespective of their role. DBA, developer, tester, or technical leader, no matter what your role is, if you are new to PostgreSQL and exploring how to use it in your applications, then this book is for you.
Amazon Verified review
Rafayel Aug 16, 2021
Rated 4 out of 5 stars
As a PostgreSQL DBA, I can claim that the book is well written and has interesting implementations of PostgreSQL. This book is not for beginners. It requires the reader to have basic knowledge and some experience in PostgreSQL administration, ops, scripting, etc. What is particularly good about this book is that it's written based on real practice. It's not an academic book coming from PostgreSQL's documentation and about how things should work. As a DBA, I would have liked to see more examples of best practices. I would have liked to read about security as well. In this book I didn't find any security-related chapters, and I think it is one of the most important aspects of modern applications. Overall, I think the book is for developers who need to learn more about PostgreSQL implementations.
Amazon Verified review
Joseph Oct 22, 2021
Rated 4 out of 5 stars
This is a nicely written book that covers a lot of areas pertaining to application development alongside the concepts of provisioning and managing the PostgreSQL database as a backend. Some of the topics covered would be more digestible for developers with some experience rather than newbies. The step-by-step instructions for the sample projects are appreciable.
Amazon Verified review

FAQs

What is included in a Packt subscription?

A subscription provides you with full access to view all Packt and licensed content online; this includes exclusive access to Early Access titles. Depending on the tier chosen, you can also earn credits and discounts to use towards owning content.

How can I cancel my subscription?

To cancel your subscription, simply go to the account page, found in the top right of the page or at https://subscription.packtpub.com/my-account/subscription. From there you will see the 'cancel subscription' button in the grey box containing your subscription information.

What are credits?

Credits can be earned from reading 40 sections of any title within the payment cycle, a month starting from the day of subscription payment. You also earn a credit every month if you subscribe to our annual or 18-month plans. Credits can be used to buy books DRM-free, the same way that you would pay for a book. Your credits can be found on the subscription homepage, subscription.packtpub.com, by clicking on the 'My Library' dropdown and selecting 'credits'.

What happens if an Early Access Course is cancelled?

Projects are rarely cancelled, but sometimes it's unavoidable. If an Early Access course is cancelled or excessively delayed, you can exchange your purchase for another course. For further details, please contact us here.

Where can I send feedback about an Early Access title?

If you have any feedback about the product you're reading, or Early Access in general, then please fill out a contact form here and we'll make sure the feedback gets to the right team. 

Can I download the code files for Early Access titles?

We try to ensure that all books in Early Access have code available to use, download, and fork on GitHub. This helps us be more agile in the development of the book, and helps keep the often changing code base of new versions and new technologies as up to date as possible. Unfortunately, however, there will be rare cases when it is not possible for us to have downloadable code samples available until publication.

When we publish the book, the code files will also be available to download from the Packt website.

How accurate is the publication date?

The publication date is as accurate as we can be at any point in the project. Unfortunately, delays can happen. Often those delays are out of our control, such as changes to the technology code base or delays in the tech release. We do our best to give you an accurate estimate of the publication date at any given time, and as more chapters are delivered, the more accurate the delivery date will become.

How will I know when new chapters are ready?

We'll let you know every time there has been an update to a course that you've bought in Early Access. You'll get an email to let you know there has been a new chapter, or a change to a previous chapter. The new chapters are automatically added to your account, so you can also check back there any time you're ready and download or read them online.

I am a Packt subscriber, do I get Early Access?

Yes, all Early Access content is fully available through your subscription. You will need to have a paid for or active trial subscription in order to access all titles.

How is Early Access delivered?

Early Access is currently only available as a PDF or through our online reader. As we make changes or add new chapters, the files in your Packt account will be updated so you can download them again or view them online immediately.

How do I buy Early Access content?

Early Access is a way of us getting our content to you quicker, but the method of buying the Early Access course is still the same. Just find the course you want to buy, go through the check-out steps, and you’ll get a confirmation email from us with information and a link to the relevant Early Access courses.

What is Early Access?

Keeping up to date with the latest technology is difficult; new versions, new frameworks, new techniques. This feature gives you a head start on our content, as it's being created. With Early Access you'll receive each chapter as it's written, and get regular updates throughout the product's development, as well as the final course as soon as it's ready. We created Early Access as a means of giving you the information you need, as soon as it's available. As we go through the process of developing a course, 99% of it can be ready but we can't publish until that last 1% falls into place. Early Access helps to unlock the potential of our content early, to help you start your learning when you need it most. You not only get access to every chapter as it's delivered, edited, and updated, but you'll also get the finalized, DRM-free product to download in any format you want when it's published. As a member of Packt, you'll also be eligible for our exclusive offers, including a free course every day, and discounts on new and popular titles.