Creative Projects for Rust Programmers

Storing and Retrieving Data

A typical need of any software application is to input and output data, either by reading and writing data files or data streams, or by querying and manipulating a database. Regarding files and streams, unstructured or binary data is hard to manipulate, and so it is not recommended.

Also, proprietary data formats are not recommended because of the vendor lock-in risk, and so only standard data formats should be used. Fortunately, there are free Rust libraries that come to the rescue in these situations. There are Rust crates available to manipulate some of the most popular file formats, such as TOML, JSON, and XML.

In terms of databases, there are Rust crates to manipulate data using some of the most popular databases, such as SQLite, PostgreSQL, and Redis.

In this chapter, you will learn about the following:

  • How to read configuration data from a TOML file
  • How to read or write a JSON data file
  • How to read an XML data file
  • How to query or manipulate data in a SQLite database
  • How to query or manipulate data in a PostgreSQL database
  • How to query or manipulate data in a Redis database

Technical requirements

To run the SQLite code, you must install the SQLite runtime library. It is also useful (although not required) to install a SQLite interactive manager. You can download the precompiled binaries of the SQLite tools from https://www.sqlite.org/download.html; version 3.11 or higher is recommended.

Please note that if you're using a Debian-derived Linux distribution, the libsqlite3-dev package should be installed.

To run the PostgreSQL code, you must also install and run the PostgreSQL Database Management System (DBMS). As with SQLite, it is useful but not required to install a PostgreSQL interactive manager. You can download the precompiled binaries of the PostgreSQL DBMS from https://www.postgresql.org/download/; version 7.4 or higher is acceptable.

Installing and running the Redis server is necessary when you're running the Redis code. You can download it from https://redis.io/download.

The complete source code for this chapter can be found in the Chapter02 folder of the repository at https://github.com/PacktPublishing/Creative-Projects-for-Rust-Programmers. In this folder, there is a sub-folder for every project, plus a folder named data, which contains the data that we'll use as input for the projects.

Project overview

In this chapter, we'll look at how to build a program that loads a JSON file and an XML file into three databases: a SQLite database, a PostgreSQL database, and a Redis key-value store. To avoid hardwiring the names and positions of the files and the database credentials into the program, we are going to load them from a TOML configuration file.

The final project is named transformer, but we'll explain this through several preliminary small projects:

  • toml_dynamic and toml_static: These read a TOML file in two different ways.
  • json_dynamic and json_static: These read a JSON file in two different ways.
  • xml_example: This reads an XML file.
  • sqlite_example: This creates two tables in a SQLite database, inserts records into them, and queries them.
  • postgresql_example: This creates two tables in a PostgreSQL database, inserts records into them, and queries them.
  • redis_example: This adds some data to a key-value store and queries it.

Reading a TOML file

One simple and maintainable way to store information in a filesystem is to use a text file. This is also very efficient for data spanning no more than 100 KB. However, there are several competing standards for storing information in text files, such as INI, CSV, JSON, XML, YAML, and others.

The one used by Cargo is TOML. This is a really powerful format that is used by many Rust developers to store the configuration data of their apps. It is designed to be written by hand, using a text editor, but it can also be written by an application very easily.
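For instance, the following hand-written fragment (its keys and values are taken from the config.toml file used throughout this chapter; the comment line is our own addition) shows the typical shape of a TOML file: named sections in square brackets and key = value pairs:

```toml
# Comments like this one survive only as long as the file
# is edited by hand; programs that rewrite TOML drop them.
[input]
xml_file = "../data/sales.xml"
json_file = "../data/sales.json"

[postgresql]
database = "Rust2018"
```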

The toml_dynamic and toml_static projects (using the toml crate) load data from a TOML file. Reading a TOML file is useful when configuring a software application, and this is what we'll do. We will use the data/config.toml file, which contains all of the parameters for the projects of this chapter.

You can also create or modify a TOML file by using code, but we are not going to do that. Being able to modify a TOML file can be useful in some scenarios, such as to save user preferences.

It is important to consider that when a TOML file is changed by a program, it undergoes dramatic restructuring:

  • It acquires specific formatting, which you may dislike.
  • It loses all of its comments.
  • Its items are sorted alphabetically.

So, if you want to use the TOML format both for manually edited parameters and for program-saved data, you would be better off using two distinct files:

  • One edited only by humans
  • One edited primarily by your software, but occasionally also by humans

This chapter describes two projects in which a TOML file is read using different techniques. These techniques are to be used in two different cases:

  • In a situation where we are not sure which fields are contained in the file, and so we want to explore it. In this case, we use the toml_dynamic program.
  • In another situation where, in our program, we describe exactly which fields should be contained in the file and we don't accept a different format. In this case, we use the toml_static program.

Using toml_dynamic

The purpose of this section is to read the config.toml file, located in the data folder, when we want to explore the content of that file. The first three lines of this file are as follows:

[input]
xml_file = "../data/sales.xml"
json_file = "../data/sales.json"

After these lines, the file contains other sections. Among them is the [postgresql] section, which contains the following line:

database = "Rust2018"

To run this project, enter the toml_dynamic folder and type in cargo run ../data/config.toml. A long output should be printed. It will begin with the following lines:

Original: Table(
{
"input": Table(
{
"json_file": String(
"../data/sales.json",
),
"xml_file": String(
"../data/sales.xml",
),
},
),

Notice that this is just a verbose representation of the first three lines of the config.toml file. This output proceeds with emitting a similar representation for the rest of the file. After having printed the whole data structure representing the file that is read, the following line is added to the output:

[Postgresql].Database: Rust2018

This is the result of a specific query on the data structure loaded when the file is read.

Let's look at the code of the toml_dynamic program:

  1. Declare a variable that will contain a description of the whole file. This variable is initialized in the next three statements:

     let config_const_values =

  2. We add the pathname of the file from the first argument in the command line to config_path. Then, we load the contents of this file into the config_text string and we parse this string into a toml::Value structure. This is a recursive structure because it can have a Value property among its fields:

     {
         let config_path = std::env::args().nth(1).unwrap();
         let config_text =
             std::fs::read_to_string(&config_path).unwrap();
         config_text.parse::<toml::Value>().unwrap()
     };

  3. This structure is then printed using the debug structured formatting (:#?), and a value is retrieved from it:

     println!("Original: {:#?}", config_const_values);
     println!("[Postgresql].Database: {}",
         config_const_values.get("postgresql").unwrap()
             .get("database").unwrap()
             .as_str().unwrap());

Notice that a lot of code is required to get the value of the "database" item contained in the "postgresql" section. Each get call needs to look up a string key, and each lookup may fail. That is the price of uncertainty.

Using toml_static

On the other hand, if we are quite sure of the organization of our TOML file, we should use another technique shown in the project, toml_static.

To run it, open the toml_static folder and type in cargo run ../data/config.toml. The program will only print the following line:

[postgresql].database: Rust2018

This project uses two additional crates:

  • serde: This enables the use of the basic serialization/deserialization operations.
  • serde_derive: This provides a powerful additional feature known as the custom-derive feature, which allows you to serialize/deserialize using a struct.

serde is the standard serialization/deserialization library. Serialization is the process of converting data structures of the program into a string (or a stream). Deserialization is the reverse process; it is the process of converting a string (or a stream) into some data structures of the program.

To read a TOML file, we need to use deserialization.

In these two projects, we don't need to use serialization as we are not going to write a TOML file.
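To make the idea concrete before looking at serde itself, here is a minimal hand-rolled sketch of deserialization: turning one key = "value" line into a struct field. The parsing logic is our own illustration, not how serde works internally; the Input type and field name follow the config file used in this chapter:

```rust
// Illustrative only: a hand-rolled "deserializer" that turns a
// `xml_file = "..."` line into a struct, mimicking in miniature
// what serde derives automatically for whole files.
struct Input {
    xml_file: String,
}

fn deserialize_input(text: &str) -> Option<Input> {
    // Expect a single line of the form: xml_file = "..."
    let (key, value) = text.split_once('=')?;
    if key.trim() != "xml_file" {
        return None;
    }
    let value = value.trim().trim_matches('"').to_string();
    Some(Input { xml_file: value })
}

fn main() {
    let input = deserialize_input("xml_file = \"../data/sales.xml\"").unwrap();
    println!("{}", input.xml_file);
}
```

The point of serde is that this tedious field-by-field checking is generated for you from the struct declaration alone.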

In the code, first, a struct is defined for any section contained in the data/config.toml file. That file contains the Input, Redis, Sqlite, and Postgresql sections, and so we declare as many Rust structs as the sections of the file we want to read; then, the Config struct is defined to represent the whole file, having these sections as members.

For example, this is the structure for the Input section:

#[allow(unused)]
#[derive(Deserialize)]
struct Input {
xml_file: String,
json_file: String,
}

Notice that the preceding declaration is preceded by two attributes.

The allow(unused) attribute is used to prevent the compiler from warning us about unused fields in the following structure. It is convenient for us to avoid these noisy warnings. The derive(Deserialize) attribute is used to activate the automatic deserialization initiated by serde for the following structure.

After these declarations, it is possible to write the following line of code:

toml::from_str(&config_text).unwrap()

This invokes the from_str function, which parses the text of the file into a struct. The type of that struct is not specified in this expression, but its value is assigned to the variable declared in the first line of the main function:

 let config_const_values: Config =

So, its type is Config.

Any discrepancies between the file's contents and the struct type will be considered an error in this operation. So, if this operation is successful, any other operation on the structure cannot fail.

While the previous program (toml_dynamic) had a kind of dynamic typing, such as that of Python or JavaScript, this program has a kind of static typing, similar to Rust or C++.

The advantage of static typing appears in the last statement, where the same behavior as the long statement of the previous project is obtained by simply writing config_const_values.postgresql.database.

Reading and writing a JSON file

For storing data that is more complex than that which is stored in a configuration file, JSON format is more appropriate. This format is quite popular, particularly among those who use the JavaScript language.

We are going to read and parse the data/sales.json file. This file contains a single anonymous object, which contains two arrays—"products" and "sales".

The "products" array contains two objects, each one having three fields:

  "products": [
{
"id": 591,
"category": "fruit",
"name": "orange"
},
{
"id": 190,
"category": "furniture",
"name": "chair"
}
],

The "sales" array contains three objects, each one containing five fields:

"sales": [
{
"id": "2020-7110",
"product_id": 190,
"date": 1234527890,
"quantity": 2.0,
"unit": "u."
},
{
"id": "2020-2871",
"product_id": 591,
"date": 1234567590,
"quantity": 2.14,
"unit": "Kg"
},
{
"id": "2020-2583",
"product_id": 190,
"date": 1234563890,
"quantity": 4.0,
"unit": "u."
}
]

The information in the arrays is about some products to sell and some sale transactions associated with those products. Notice that the second field of each sale ("product_id") is a reference to a product, and so it should be processed after the corresponding product object has been created.

We will see a pair of programs with the same behavior. They read the JSON file, increment the quantity of the second sale object by 1.5, and then save the whole updated structure into another JSON file.

Similarly to the TOML format case, there can also be a dynamic parsing technique used for JSON files, where the existence and type of any data field is checked by the application code, and a static parsing technique, where it uses the deserialization library to check the existence and type of any field.

So, we have two projects: json_dynamic and json_static. To run each of them, open its folder and type in cargo run ../data/sales.json ../data/sales2.json. The program will not print anything, but it will read the first file specified in the command line and create the second file that is specified.

The created file is similar to the read file, but with the following differences:

  • The fields of the file created by json_dynamic are sorted in alphabetical order, while the fields of the file created by json_static are sorted in the same order as in the Rust data structure.
  • The quantity of the second sale is incremented from 2.14 to 3.64.
  • The final empty line is removed in both created files.

Now, we can see the implementations of the two techniques of serialization and deserialization.

The json_dynamic project

Let's look at the source code of the project:

  1. This project gets the pathnames of two files from the command line—the existing JSON file ("input_path") to read into a memory structure and a JSON file to create ("output_path") by saving the loaded structure, after having modified it a bit.
  2. Then, the input file is loaded into the string named sales_and_products_text and the generic serde_json::from_str::<Value> function is used to parse the string into a dynamically typed structure representing the JSON file. This structure is stored in the sales_and_products local variable.

Imagine that we want to change the quantity sold by the second sale transaction, incrementing it by 1.5 kilograms:

  3. First, we must get to this value using the following expression:

     sales_and_products["sales"][1]["quantity"]

  4. This retrieves the "sales" sub-object of the general object. It is an array containing three objects.
  5. Then, this expression gets the second item (counting from zero, [1]) of this array. This is an object representing a single sale transaction.
  6. After this, it gets the "quantity" sub-object of the sale transaction object.
  7. The value we have reached has a dynamic type that we expect to be serde_json::Value::Number, and so we perform a pattern match with this type, specifying the if let Value::Number(n) clause.
  8. If all is good, the match succeeds and we get a variable named n containing a number, which can be converted into a Rust floating-point number by using the as_f64 function. Lastly, we can increment the Rust number and then create a JSON number from it using the from_f64 function. We can then assign this object to the JSON structure using the same expression we used to get it:

     sales_and_products["sales"][1]["quantity"] =
         Value::Number(Number::from_f64(
             n.as_f64().unwrap() + 1.5).unwrap());

  9. The last statement of the program saves the JSON structure to a file. Here, the serde_json::to_string_pretty function is used. As the name suggests, this function adds formatting whitespace (blanks and newlines) to make the resulting JSON file more human-readable. There is also the serde_json::to_string function, which creates a more compact version of the same information. It is much harder for people to read, but somewhat quicker for a computer to process:

     std::fs::write(
         output_path,
         serde_json::to_string_pretty(&sales_and_products).unwrap(),
     ).unwrap();

The json_static project

If, for our program, we are sure that we know the structure of the JSON file, a statically typed technique can and should be used instead. It is shown in the json_static project. The situation here is similar to that of the projects processing the TOML file.

The source code of the static version first declares three structs—one for every object type contained in the JSON file we are going to process. Each struct is preceded by the following attribute:

#[derive(Deserialize, Serialize, Debug)]

Let's understand the preceding snippet:

  • The Deserialize trait is required to parse (that is, read) JSON strings into this struct.
  • The Serialize trait is required to format (that is, write) this struct into a JSON string.
  • The Debug trait is just handy for printing this struct on a debug trace.

The JSON string is parsed using the serde_json::from_str::<SalesAndProducts> function. Then, the code to increment the quantity of sold oranges becomes quite simple:

sales_and_products.sales[1].quantity += 1.5

The rest of the code is unchanged.

Reading an XML file

Another very popular text format is XML. Unfortunately, there is no stable serialization/deserialization library to manage the XML format. However, this is not necessarily a shortcoming. In actual fact, the XML format is often used to store large datasets; so large, in fact, that it would be inefficient to load them all into memory before converting the data into an internal format. In these cases, it may be more efficient to scan the file or incoming stream and process it as it is read.

The xml_example project is a rather convoluted program that scans the XML file specified on the command line and, in a procedural fashion, loads information from the file into a Rust data structure. It is meant to read the ../data/sales.xml file. This file has a structure corresponding to the JSON file we saw in the previous section. The following lines show an excerpt of that file:

<?xml version="1.0" encoding="utf-8"?>
<sales-and-products>
<product>
<id>862</id>
</product>
<sale>
<id>2020-3987</id>
</sale>
</sales-and-products>

All XML files have a header in the first line and then one root element; in this case, the root element is named sales-and-products. This element contains two kinds of elements: product and sale. Both kinds of elements have specific sub-elements, which are the fields of the corresponding data. In this example, only the id fields are shown.
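Combining this with the corresponding JSON file from the previous section and the console output shown next, a complete product element should look roughly like this (the exact field order in sales.xml is an assumption):

```xml
<product>
    <id>862</id>
    <category>fruit</category>
    <name>cherry</name>
</product>
```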

To run the project, open its folder and type in cargo run ../data/sales.xml. Some lines will be printed on the console. The first four of them should be as follows:

Got product.id: 862.
Got product.category: fruit.
Got product.name: cherry.
Exit product: Product { id: 862, category: "fruit", name: "cherry" }

These describe the contents of the specified XML file. In particular, the program found a product with ID 862, then it detected that it is a fruit, then that it is a cherry, and then, when the whole product had been read, the whole struct representing the product was printed. A similar output will appear for sales.

The parsing is performed using only the xml-rs crate. This crate enables a mechanism of parsing, shown in the following code excerpt:

let file = std::fs::File::open(pathname).unwrap();
let file = std::io::BufReader::new(file);
let parser = EventReader::new(file);
for event in parser {
match &location_item {
LocationItem::Other => ...
LocationItem::InProduct => ...
LocationItem::InSale => ...
}
}

An object of the EventReader type scans the buffered file and it generates an event whenever a step is performed in the parsing. The application code handles these kinds of events according to their needs.

The word event is used by this crate, but the word transition would probably be a better description of the data extracted by the parser.

A complex language is hard to parse, but for languages as simple as our data, the situation during the parsing can be modeled by a state machine. To that purpose, three enum variables are declared in the source code: location_item, with the LocationItem type; location_product, with the LocationProduct type; and location_sale, with the LocationSale type.

The first one indicates the current position of the parsing in general. We can be inside a product (InProduct), inside a sale (InSale), or outside of both (Other). If we are inside a product, the LocationProduct enum indicates the current position of parsing inside the current product. This can be within any of the allowed fields or outside of all of them. Similar states happen for sales.
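The nested states can be sketched in isolation as follows. The enum and variant names follow the text above, while the transition function is our own simplified illustration, not the project's actual code:

```rust
// Simplified model of the parser's state machine: where we are in
// the document, and, when inside a product, which field we are in.
#[derive(Debug, PartialEq, Clone, Copy)]
enum LocationItem {
    Other,
    InProduct,
    InSale,
}

#[derive(Debug, PartialEq, Clone, Copy)]
enum LocationProduct {
    Other,
    InId,
    InCategory,
    InName,
}

// On a StartElement event, the element name drives the transition.
fn on_start_element(item: &mut LocationItem, product: &mut LocationProduct, name: &str) {
    match (*item, name) {
        (LocationItem::Other, "product") => *item = LocationItem::InProduct,
        (LocationItem::Other, "sale") => *item = LocationItem::InSale,
        (LocationItem::InProduct, "id") => *product = LocationProduct::InId,
        (LocationItem::InProduct, "category") => *product = LocationProduct::InCategory,
        (LocationItem::InProduct, "name") => *product = LocationProduct::InName,
        _ => {}
    }
}

fn main() {
    let mut item = LocationItem::Other;
    let mut product = LocationProduct::Other;
    on_start_element(&mut item, &mut product, "product");
    on_start_element(&mut item, &mut product, "category");
    assert_eq!(item, LocationItem::InProduct);
    assert_eq!(product, LocationProduct::InCategory);
}
```

The real program handles EndElement events symmetrically, popping back to the outer state when a product or sale element closes.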

The iteration encounters several kinds of events. The main ones are the following:

  • XmlEvent::StartElement: Signals that an XML element is beginning. It is decorated by the name of the beginning element and the possible attributes of that element.
  • XmlEvent::EndElement: Signals that an XML element is ending. It is decorated by the name of the ending element.
  • XmlEvent::Characters: Signals that the textual contents of an element are available. It is decorated by that available text.

The program declares a mutable product struct, with the Product type, and a mutable sale struct, with the Sale type. They are initialized with default values. Whenever there are some characters available, they are stored in the corresponding field of the current struct.

For example, consider a situation where the value of location_item is LocationItem::InProduct and the value of location_product is LocationProduct::InCategory—that is, we are in a category of a product. In this situation, there can be the name of the category or the end of the category. To get the name of the category, the code contains this pattern of a match statement:

Ok(XmlEvent::Characters(characters)) => {
product.category = characters.clone();
println!("Got product.category: {}.", characters);
}

In this statement, the characters variable gets the name of the category and a clone of it is assigned to the product.category field. Then, the name is printed to the console.

Accessing databases

Text files are good when they are small and when they don't need to be changed often. Actually, the only way that a text file can be changed is if you append something to the end of it or rewrite it completely. If you want to change the information in a large dataset quickly, the only way to do so is to use a database manager. In this section, we are going to learn how to manipulate a SQLite database with a simple example.

But first, let's look at three popular, broad categories of database managers:

  • Single-user databases: These store all of the databases in a single file, which must be accessible by the application code. The database code is linked into the application (it may be a static-link library or a dynamic-link library). Only one user at a time is allowed to access it, and all users have administrative privileges. To move the database anywhere, you simply move the file. The most popular choices in this category are SQLite and Microsoft Access.
  • DBMS: This is a process that has to be started as a service. Multiple clients can connect to it at the same time, and they can also apply changes at the same time without any data corruption. It requires more storage space, more memory, and much more start up time (for the server). There are several popular choices in this category, such as Oracle, Microsoft SQL Server, IBM DB2, MySQL, and PostgreSQL.
  • Key-value stores: This is a process that has to be started as a service. Multiple clients can connect to it at the same time and apply changes at the same time. It is essentially a large memory hash map that can be queried by other processes and that can optionally store its data in a file and reload it when it is restarted. This category is less popular than the other two, but it is gaining ground as the backend of high-performance websites. One of the most popular choices is Redis.

In the following sections, we are going to show you how to access SQLite single-user databases (in the sqlite_example project), PostgreSQL DBMSes (in the postgresql_example project), and Redis key-value stores (in the redis_example project). Then, in the transformer project, all three kinds of databases will be used together.

Accessing a SQLite database

The source code for this section is found in the sqlite_example project. To run it, open its folder and type in cargo run.

This will create the sales.db file in the current folder. This file contains a SQLite database. Then, it will create the Products and Sales tables in this database, it will insert a row into each of these tables, and it will perform a query on the database. The query asks for all the sales, joining each of them with its associated product. For each extracted row, a line will be printed onto the console, showing the timestamp of the sale, the weight of the sale, and the name of the associated product. As there is only one sale in the database, you will see just the following line printed:

At instant 1234567890, 7.439 Kg of pears were sold. 

This project only uses the rusqlite crate. Its name is a contraction of Rust SQLite. To use this crate, the Cargo.toml file must contain the following line:

rusqlite = "0.23"

Implementing the project

Let's look at how the code for the sqlite_example project works. The main function is quite simple:

fn main() -> Result<()> {
let conn = create_db()?;
populate_db(&conn)?;
print_db(&conn)?;
Ok(())
}

It invokes create_db to open or create a database with its empty tables, and to open and return a connection to this database.

Then, it invokes populate_db to insert rows into the tables of the database referred to by that connection.

Then, it invokes print_db to execute a query on this database and prints the data extracted by that query.

The create_db function is long but easy to understand:

fn create_db() -> Result<Connection> {
let database_file = "sales.db";
let conn = Connection::open(database_file)?;
let _ = conn.execute("DROP TABLE Sales", params![]);
let _ = conn.execute("DROP TABLE Products", params![]);
conn.execute(
"CREATE TABLE Products (
id INTEGER PRIMARY KEY,
category TEXT NOT NULL,
name TEXT NOT NULL UNIQUE)",
params![],
)?;
conn.execute(
"CREATE TABLE Sales (
id TEXT PRIMARY KEY,
product_id INTEGER NOT NULL REFERENCES Products,
sale_date BIGINT NOT NULL,
quantity DOUBLE PRECISION NOT NULL,
unit TEXT NOT NULL)",
params![],
)?;
Ok(conn)
}

The Connection::open function simply uses a path to a SQLite database file to open a connection. If this file does not exist, it will be created. As you can see, the created sales.db file is very small. Typically, empty databases of DBMSes are 1,000 times larger.

To perform a data manipulation command, the execute method of the connection is called. Its first argument is a SQL statement, possibly containing some parameters, specified as $1, $2, $3, and so on. The second argument of the function is a reference to a slice of values that are used to replace such parameters.

Of course, if there are no parameters, the parameter values list must be empty. The first parameter value, which has an index of 0, replaces the $1 parameter, the second one replaces the $2 parameter, and so on.

Notice that the arguments of a parameterized SQL statement can be of different data types (numeric, alpha-numeric, BLOBs, and so on), but Rust collections can only contain objects of the same data type. Therefore, the params! macro is used to perform a bit of magic. The data type of the second argument of the execute method must be that of a collection that can be iterated over and whose items implement the ToSql trait. The objects implementing this trait, as its name implies, can be used as parameters of a SQL statement. The rusqlite crate contains an implementation of this trait for many Rust basic types, such as numbers and strings.

So, for example, the params![34, "abc"] expression generates a collection that can be iterated over. The first item of this iteration can be converted into an object containing the number 34, which can be used to replace a SQL parameter of a numeric type. The second item can be converted into an object containing the "abc" string, which can be used to replace a SQL parameter of an alpha-numeric type.
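The idea behind this bit of magic can be modeled in plain Rust: a small enum plays the role of the type-erased SQL value, and a toy macro builds the homogeneous collection. This is only an illustration of the technique, not rusqlite's actual implementation:

```rust
// A simplified stand-in for rusqlite's parameter handling: each
// Rust value is converted into one variant of a single enum type,
// so a mixed list of parameters becomes a homogeneous Vec.
#[derive(Debug, PartialEq)]
enum SqlValue {
    Integer(i64),
    Text(String),
}

impl From<i64> for SqlValue {
    fn from(n: i64) -> Self {
        SqlValue::Integer(n)
    }
}

impl From<&str> for SqlValue {
    fn from(s: &str) -> Self {
        SqlValue::Text(s.to_string())
    }
}

// Our toy version of the `params!` macro: convert every argument
// into SqlValue and collect the results into one Vec.
macro_rules! my_params {
    ($($value:expr),*) => {
        vec![$(SqlValue::from($value)),*]
    };
}

fn main() {
    let params = my_params![34, "abc"];
    assert_eq!(params[0], SqlValue::Integer(34));
    assert_eq!(params[1], SqlValue::Text("abc".to_string()));
}
```

rusqlite works along the same lines, except that its values implement the ToSql trait instead of being wrapped in a hand-written enum.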

Now, let's look at the populate_db function. It contains statements to insert rows into the database. Here is one of those statements:

conn.execute(
"INSERT INTO Products (
id, category, name
) VALUES ($1, $2, $3)",
params![1, "fruit", "pears"],
)?;

As explained before, this statement will have the effect of executing the following SQL statement:

INSERT INTO Products (
id, category, name
) VALUES (1, 'fruit', 'pears')

At last, we see the whole print_db function, which is more complex than the others:

fn print_db(conn: &Connection) -> Result<()> {
let mut command = conn.prepare(
"SELECT p.name, s.unit, s.quantity, s.sale_date
FROM Sales s
LEFT JOIN Products p
ON p.id = s.product_id
ORDER BY s.sale_date",
)?;
for sale_with_product in command.query_map(params![], |row| {
Ok(SaleWithProduct {
category: "".to_string(),
name: row.get(0)?,
quantity: row.get(2)?,
unit: row.get(1)?,
date: row.get(3)?,
})
})? {
if let Ok(item) = sale_with_product {
println!(
"At instant {}, {} {} of {} were sold.",
item.date, item.quantity, item.unit, item.name
);
}
}
Ok(())
}

To perform a SQL query, first, the SELECT SQL statement must be prepared by calling the prepare method of the connection, to convert it into an efficient internal format, with the Statement data type. This object is assigned to the command variable. A prepared statement must be mutable to allow the following replacement of parameters. In this case, however, we don't have any parameters.

A query can generate several rows, and we want to process them one at a time, so we must create an iterator from this command. This is done by calling the query_map method of the command. This method receives two arguments, a slice of parameter values and a closure, and it returns an iterator. The query_map function performs two jobs: first, it replaces the specified parameters, and then it uses the closure to map (or transform) each extracted row into a handier structure. In our case, we have no parameters to replace, and so we just create a specific structure with the SaleWithProduct type. To extract the fields from a row, the get method is used. It takes a zero-based index over the fields specified in the SELECT query. This structure is the object returned by the iterator for every row extracted by the query, and it is assigned to the iteration variable named sale_with_product.

Now that we have learned how to access a SQLite database, let's check the PostgreSQL database management system.

Accessing a PostgreSQL database

What we did with the SQLite database is similar to what we will do with the PostgreSQL database. This is because both are based on the SQL language, but mostly because SQLite is designed to be similar to PostgreSQL. Converting an application from PostgreSQL to SQLite may be harder, because the former has many advanced features that are not available in the latter.

In this section, we are going to convert the example from the previous section so that it works with a PostgreSQL database instead of SQLite, and we'll explain the differences along the way.

The source code for this section can be found in the postgresql_example folder. To run it, open its folder and type in cargo run. This will carry out essentially the same operations that we saw for sqlite_example, and so after creating and populating the database, it will print the following:

At instant 1234567890, 7.439 Kg of pears were sold.

Implementation of the project

This project only uses the crate named postgres, whose name is a popular contraction of PostgreSQL.

Creating a connection to a PostgreSQL database is very different from creating a connection to a SQLite database. As the latter is just a file, you open it much as you would open a file, by writing Connection::open(<pathname of the db file>). To connect to a PostgreSQL database, instead, you need access to a computer where a server is running, access to the TCP port on which that server is listening, and valid credentials on that server (a username and password). Optionally, you can then specify which of the databases managed by this server you want to use.

So, the general form of the call is Connection::connect(<URL>, <TlsMode>), where the URL can be, for example, postgres://postgres:post@localhost:5432/Rust2018. The general form of the URL is postgres://username[:password]@host[:port][/database], where the password, the port, and the database parts are optional. The TlsMode argument specifies whether the connection must be encrypted.
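The postgres://username[:password]@host[:port][/database] form can be made concrete with a small helper. Note that pg_url is a hypothetical function written here for illustration; it is not part of the postgres crate:

```rust
// Hypothetical helper that assembles a PostgreSQL connection URL from its
// parts. The password, port, and database components are optional, as
// described in the text.
fn pg_url(
    user: &str,
    password: Option<&str>,
    host: &str,
    port: Option<u16>,
    database: Option<&str>,
) -> String {
    let mut url = format!("postgres://{}", user);
    if let Some(pw) = password {
        url.push_str(&format!(":{}", pw));
    }
    url.push_str(&format!("@{}", host));
    if let Some(p) = port {
        url.push_str(&format!(":{}", p));
    }
    if let Some(db) = database {
        url.push_str(&format!("/{}", db));
    }
    url
}

fn main() {
    // Reproduces the example URL from the text.
    println!(
        "{}",
        pg_url("postgres", Some("post"), "localhost", Some(5432), Some("Rust2018"))
    );
}
```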

The port is optional because it has a value of 5432 by default. Another difference is that this crate does not use the params! macro. Instead, it allows us to specify a reference to a slice; in this case, an empty slice (&[]), because we don't need to specify parameters.

The table creation and population process is similar to the way it was done for sqlite_example. The query is different, however. This is the body of the print_db function:

for row in &conn.query(
    "SELECT p.name, s.unit, s.quantity, s.sale_date
    FROM Sales s
    LEFT JOIN Products p
    ON p.id = s.product_id
    ORDER BY s.sale_date",
    &[],
)? {
    let sale_with_product = SaleWithProduct {
        category: "".to_string(),
        name: row.get(0),
        quantity: row.get(2),
        unit: row.get(1),
        date: row.get(3),
    };
    println!(
        "At instant {}, {} {} of {} were sold.",
        sale_with_product.date,
        sale_with_product.quantity,
        sale_with_product.unit,
        sale_with_product.name
    );
}

With PostgreSQL, the query method of the connection carries out parameter substitution, similarly to the execute method, but it does not map each row to a structure. Instead, it returns a collection of rows, which can be iterated over immediately in a for statement. Then, in the body of the loop, the row variable can be used (as in the example) to fill a struct.

As we now know how to access data in the SQLite and PostgreSQL databases, let's see how to store and retrieve data from a Redis store.

Storing and retrieving data from a Redis store

Some applications need a very fast response time for certain kinds of data, faster than what a DBMS can offer. Usually, a DBMS dedicated to one user would be fast enough, but some applications (typically large-scale web applications) must serve hundreds of concurrent queries and many concurrent updates. You can spread the load over many computers, but the data must be kept coherent among them, and keeping this coherence can create a performance bottleneck.

A solution to this problem is to use a key-value store, a very simple kind of database that can be replicated across a network. It keeps data in memory to maximize speed, but it also supports saving the data to a file, which avoids losing information if the server is stopped.

A key-value store is similar to the HashMap collection of the Rust standard library, but it is managed by a server process, possibly running on a different computer, and every query is a message exchanged between the client and the server. Redis is one of the most widely used key-value stores.
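The HashMap analogy can be made concrete. The three operations used later in this section (set, get, and exists) correspond to insert, get, and contains_key on a local map. This is only an in-process sketch, with no server, networking, or persistence involved:

```rust
use std::collections::HashMap;

fn main() {
    // A local stand-in for a key-value store; Redis adds a server process,
    // message exchange over the network, and optional persistence to a file
    // on top of this basic idea.
    let mut store: HashMap<String, String> = HashMap::new();

    // "set": store a key-value pair.
    store.insert("aKey".to_string(), "a string".to_string());

    // "get": retrieve the value for a key; None for a missing key.
    assert_eq!(store.get("aKey").map(String::as_str), Some("a string"));
    assert_eq!(store.get("missing"), None);

    // "exists": check for a key without retrieving its value.
    assert!(store.contains_key("aKey"));
    assert!(!store.contains_key("missing"));

    println!("all checks passed");
}
```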

The source code for this project is found in the redis_example folder. To run it, open the folder and type in cargo run. This will print the following:

a string, 4567, 12345, Err(Response was of incompatible type: "Response type not string compatible." (response was nil)), false.

This simply connects to a data store on the current computer and stores the following three key-value pairs in it:

  • "aKey", associated with "a string"
  • "anotherKey", associated with 4567
  • 45, associated with 12345

Then, it queries the store for the following keys:

  • "aKey", which obtains an "a string" value
  • "anotherKey", which obtains a 4567 value
  • 45, which obtains a 12345 value
  • 40, which obtains an error

Then, it queries whether the 40 key exists in the store, which obtains false.

Implementing the project

Only the redis crate is used in this project.

The code is quite short and simple. Let's look at how it works:

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://localhost/")?;
    let mut conn = client.get_connection()?;

First, a client must be obtained. The call to redis::Client::open receives a URL and just checks whether it is valid. If the URL is valid, a redis::Client object is returned, although no connection has been opened yet. Then, the get_connection method of the client tries to connect, and if it is successful, it returns an open connection.

Any connection essentially has three important methods:

  • set: This tries to store a key-value pair.
  • get: This tries to retrieve the value associated with the specified key.
  • exists: This tries to detect whether the specified key is present in the store, without retrieving its associated value.

Then, set is invoked three times, with different types for the key and value:

conn.set("aKey", "a string")?;
conn.set("anotherKey", 4567)?;
conn.set(45, 12345)?;

Finally, get is invoked four times and exists is invoked once. The first three calls retrieve the stored values. The fourth call specifies a nonexistent key, so a nil value is returned, which cannot be converted into the required String, and so an error is generated:

conn.get::<_, String>("aKey")?,
conn.get::<_, u64>("anotherKey")?,
conn.get::<_, u16>(45)?,
conn.get::<_, String>(40),
conn.exists::<_, bool>(40)?);

You can always check the error to find out whether your key is present, but a cleaner solution is to call the exists method, which returns a Boolean value specifying whether the key is present.
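The difference between "get and inspect the error" and "ask first with exists" can be shown with any fallible lookup. In this sketch, a plain HashMap and a String error play the parts of the store and of the Redis error type; typed_get is a made-up local function, not part of the redis crate:

```rust
use std::collections::HashMap;

// Mirrors the behaviour described above: asking for a missing key through a
// typed "get" yields an error, while an "exists"-style check simply answers
// true or false. Both the store and the error type are local stand-ins.
fn typed_get(store: &HashMap<i32, String>, key: i32) -> Result<String, String> {
    store
        .get(&key)
        .cloned()
        .ok_or_else(|| format!("response was nil for key {}", key))
}

fn main() {
    let mut store = HashMap::new();
    store.insert(45, "12345".to_string());

    // Existing key: Ok, like conn.get::<_, u16>(45)?.
    println!("{:?}", typed_get(&store, 45));
    // Missing key: Err, like conn.get::<_, String>(40).
    println!("{:?}", typed_get(&store, 40));
    // Cleaner: ask first, like conn.exists::<_, bool>(40)?.
    println!("{}", store.contains_key(&40));
}
```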

With this, we now know how Rust crates are used to access, store, and retrieve data using the most popular databases.

Putting it all together

You should now know enough to build an example that does what we described at the beginning of the chapter. We have learned the following:

  • How to read a TOML file to parameterize the program
  • How to load the data regarding products and sales into memory, specified in a JSON file and in an XML file
  • How to store all of this data in three places: a SQLite DB file, a PostgreSQL database, and a Redis key-value store

The source code of the complete example is found in the transformer project. To run it, open its folder and type in cargo run ../data/config.toml. If everything is successful, it will recreate and populate the SQLite database contained in the data/sales.db file; the PostgreSQL database named Rust2018, which can be accessed from localhost on port 5432; and the Redis store, which can be accessed from localhost. Then, it will query the SQLite and PostgreSQL databases for the number of rows in their tables, and it will print the following:

SQLite #Products=4. 
SQLite #Sales=5.
PostgreSQL #Products=4.
PostgreSQL #Sales=5.

So, we have now seen a rather broad example of data manipulation.

Summary

In this chapter, we looked at some basic techniques to access data in popular text formats (TOML, JSON, and XML) or data managed by popular database managers (SQLite, PostgreSQL, and Redis). Of course, many other file formats and database managers exist, and there is still a lot to be learned about these formats and these database managers. Nevertheless, you should now have a grasp of what they do. These techniques are useful for many kinds of applications.

In the next chapter, we will learn how to build a web backend service using the REST architecture. To keep that chapter self-contained, we will only use a framework to receive and respond to web requests, and not use a database. Of course, that is quite unrealistic; but by combining those web techniques with the ones introduced in this chapter, you can build a real-world web service.

Questions

  1. Why is it not a good idea to programmatically change a TOML file edited by a user?
  2. When is it better to use dynamically typed parsing of TOML or JSON files, and when is it better to use statically typed parsing?
  3. When is it required to derive a structure from the Serialize and Deserialize traits?
  4. What is pretty generation of a JSON string?
  5. Why could it be better to use a stream parser, rather than a single-call parser?
  6. When is SQLite a better choice, and when is it better to use PostgreSQL?
  7. What is the type of the parameters passed with a SQL command to a SQLite database manager?
  8. What does the query method do on a PostgreSQL database?
  9. What are the names of the functions used to read and write values in a Redis key-value store?
  10. Can you write a program that gets an ID from the command line, queries the SQLite, PostgreSQL, or Redis database for that ID, and prints some information regarding the data found?

Key benefits

  • Work through projects that will help you build high-performance applications with Rust
  • Delve into concepts such as error handling, memory management, concurrency, generics, and macros with Rust
  • Improve business productivity by choosing the right libraries and frameworks for your applications

Description

Rust is a community-built language that solves pain points present in many other languages, thus improving performance and safety. In this book, you will explore the latest features of Rust by building robust applications across different domains and platforms. The book gets you up and running with high-quality open source libraries and frameworks available in the Rust ecosystem that can help you to develop efficient applications with Rust. You'll learn how to build projects in domains such as data access, RESTful web services, web applications, 2D games for web and desktop, interpreters and compilers, emulators, and Linux Kernel modules. For each of these application types, you'll use frameworks such as Actix, Tera, Yew, Quicksilver, ggez, and nom. This book will not only help you to build on your knowledge of Rust but also help you to choose an appropriate framework for building your project. By the end of this Rust book, you will have learned how to build fast and safe applications with Rust and have the real-world experience you need to advance in your career.

Who is this book for?

This Rust programming book is for developers who want to get hands-on experience with implementing their knowledge of Rust programming, and are looking for expert advice on which libraries and frameworks they can adopt to develop software that typically uses the Rust language.

What you will learn

  • Access TOML, JSON, and XML files and SQLite, PostgreSQL, and Redis databases
  • Develop a RESTful web service using JSON payloads
  • Create a web application using HTML templates and JavaScript and a frontend web application or web game using WebAssembly
  • Build desktop 2D games
  • Develop an interpreter and a compiler for a programming language
  • Create a machine language emulator
  • Extend the Linux Kernel with loadable modules

Product Details

Publication date: Jun 19, 2020
Length: 404 pages
Edition: 1st
Language: English
ISBN-13: 9781789346220




Table of Contents

13 Chapters

Rust 2018: Productivity
Storing and Retrieving Data
Creating a REST Web Service
Creating a Full Server-Side Web App
Creating a Client-Side WebAssembly App Using Yew
Creating a WebAssembly Game Using Quicksilver
Creating a Desktop Two-Dimensional Game Using ggez
Using a Parser Combinator for Interpreting and Compiling
Creating a Computer Emulator Using Nom
Creating a Linux Kernel Module
The Future of Rust
Assessments
Other Books You May Enjoy

