Linux System Programming Techniques: Become a proficient Linux system programmer using expert recipes and techniques

By Jack-Benny Persson

Paperback May 2021 432 pages 1st Edition

Chapter 2: Making Your Programs Easy to Script

Linux and other Unix systems have strong scripting support. The whole idea of Unix, from the very beginning, was to make a system easy to develop on. One of these features is to take the output of one program and make it the input of another program—hence building new tools with existing programs. We should always keep this in mind when creating programs for Linux. The Unix philosophy is to make small programs that do one thing only—and do it well. By having many small programs that do only one thing, we can freely choose how to combine them. And by combining small programs, we can write shell scripts—a common task in Unix and Linux.

This chapter will teach us how to make programs that are easy to script and that interact well with other programs. That way, other people will find them much more useful. It's even likely they will find new ways of using our programs that we haven't thought of, making them more popular and easier to use.

In this chapter, we will cover the following recipes:

  • Return values and how to read them
  • Exiting a program with a relevant return value
  • Redirecting stdin, stdout, and stderr
  • Connecting programs using pipes
  • Writing to stdout and stderr
  • Reading from stdin
  • Writing a pipe-friendly program
  • Redirecting the result to file
  • Reading environment variables

Let's get started!

Technical requirements

All you need for this chapter is a Linux computer with GCC and Make installed, preferably via one of the meta-packages or group installs mentioned in Chapter 1, Getting the Necessary Tools and Writing Our First Linux Programs. It's also preferable if you use the Bash shell for optimal compatibility. Most of the examples will work with other shells as well, but there's no guarantee that everything will work the same way on every possible shell out there. You can check which shell you are using by running echo $SHELL in your terminal. If you are using Bash, it will say /bin/bash.

You can download all the code for this chapter from https://github.com/PacktPublishing/Linux-System-Programming-Techniques/tree/master/ch2.

Check out the following link to see the Code in Action video: https://bit.ly/3u5VItw

Return values and how to read them

Return values are a big deal in Linux and other Unix and Unix-like systems. They are a big deal in C programming as well. Most functions in C return some value with return. It's that same return statement we use to return a value from main() to the shell. The original Unix operating system and the C programming language came around at the same time and from the same place. As soon as the C language was completed in the early 1970s, Unix was rewritten in C. Previously, it was written in assembler only. And hence, C and Unix fit together tightly.

The reason why return values are so crucial in Linux is that we can build shell scripts. Those shell scripts use other programs and, hopefully, our programs as their building blocks. For a shell script to be able to check whether a program has succeeded or not, it reads the return value of that program.

In this recipe, we will write a program that tells the user if a file or directory exists or not.

Getting ready

It's recommended that you use Bash for this recipe. I can't guarantee compatibility with other shells.

How to do it…

In this recipe, we will write a small shell script that demonstrates the purpose of the return values, how to read them, and how to interpret them. Let's get started:

  1. Before we write the code, we must investigate what return values the program uses that we will use in our script. Execute the following commands, and make a note of the return values we get. The test command is a small utility that tests certain conditions. In this example, we'll use it to determine if a file or directory exists. The -e option stands for exists. The test command doesn't give us any output; it just exits with a return value:
    $> test -e /
    $> echo $?
    0
    $> test -e /asdfasdf
    $> echo $?
    1
  2. Now that we know what return values the test program gives us (0 when the file or directory exists, otherwise 1), we can move on and write our script. Write the following code in a file and save it as exist.sh. You can also download it from https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/exist.sh. The shell script uses the test command to determine whether the specified file or directory exists:
    #!/bin/bash
    # Check if the user supplied exactly one argument
    if [ "$#" -ne 1 ]; then
        echo "You must supply exactly one argument."
        echo "Example: $0 /etc"
        exit 1 # Return with value 1
    fi
    # Check if the file/directory exists
    test -e "$1" # Perform the actual test
    return_value=$? # Save $? before another command
                    # overwrites it
    if [ "$return_value" -eq 0 ]; then
        echo "File or directory exists"
    elif [ "$return_value" -eq 1 ]; then
        echo "File or directory does not exist"
        exit 3 # Return with a special code so other
               # programs can use the value to see if a
               # file doesn't exist
    else
        echo "Unknown return value from test..."
        exit 1 # Unknown error occurred, so exit with 1
    fi
    exit 0 # If the file or directory exists, we exit
           # with 0
  3. Then, you need to make it executable with the following command:
    $> chmod +x exist.sh
  4. Now, it's time to try out our script. We try it with directories that do exist and with those that don't. We also check the exit code after each run:
    $> ./exist.sh  
    You must supply exactly one argument. 
    Example: ./exist.sh /etc 
    $> echo $?
    1
    $> ./exist.sh /etc 
    File or directory exists 
    $> echo $?
    0
    $> ./exist.sh /asdfasdf 
    File or directory does not exist
    $> echo $?
    3
  5. Now that we know that it's working and leaving the correct exit codes, we can write one-liners to use our script together with, for example, echo to print a text stating whether the file or directory exists:
    $> ./exist.sh / && echo "Nice, that one exists"
    File or directory exists
    Nice, that one exists
    $> ./exist.sh /asdf && echo "Nice, that one exists"
    File or directory does not exist
  6. We can also write a more complicated one-liner—one that takes advantage of the unique error code 3 we assigned to "file not found" in our script. Note that you shouldn't type the > at the start of the second line. It is the shell's secondary prompt, which the shell prints automatically when you end a line with a backslash to continue a long command:
    $> ./exist.sh /asdf &> /dev/null; \
    > if [ $? -eq 3 ]; then echo "That doesn't exist"; fi
    That doesn't exist

How it works…

The test program is a small utility designed to test files and directories, compare values, and so on. In our case, we used it to test whether the specified file or directory exists (-e for exists).

The test program doesn't print anything; it just exits in silence. It does, however, leave a return value. It is that return value we read with the $? variable, immediately after running test; note that every new command overwrites $?, so it must be read (or saved into another variable) right away, before the if statements use it.

There are some other special variables in the script that we used. The first one was $#, which contains the number of arguments passed to the script. It works like argc in C. At the very start of the script, we compared if $# is not equal to 1 (-ne stands for not equal). If $# is not equal to 1, an error message is printed and the script aborts with code 1.

The reason for putting $# inside quotes is just a safety mechanism. If, in some unforeseen event, $# were to contain spaces, we still want the content to be evaluated as a single value, not two. The same thing goes for the quotes around the other variables in the script.

The next special variable is $0. This variable contains argument 0, which is the name of the program, just as with argv[0] in C, as we saw in Chapter 1, Getting the Necessary Tools and Writing Our First Linux Programs.

The first argument to the program is stored in $1, as shown in the test case. The first argument in our case is the supplied filename or directory that we want to test.

Like our C programs, we want our scripts to exit with a relevant return value (or exit code, as it is also called). We use exit to leave the script and set a return value. In case the user doesn't supply precisely one argument, we exit with code 1, a general error code. And if the script is executed as it should, and the file or directory exists, we exit with code 0. If the script is executed as it should, but the file or directory doesn't exist, we exit with code 3, which isn't reserved for a particular use, but still indicates an error (all non-zero codes are error codes). This way, other scripts can fetch the return value of our script and act upon it.

In Step 5, we did just that—act upon the exit code from our script with the following command:

$> ./exist.sh / && echo "Nice, that one exists"

&& means "and". We can read the whole line as an if statement. If exist.sh is true—that is, exit code 0—then execute the echo command. If the exit code is anything other than 0, then the echo command is never executed.

In Step 6, we redirected all the output from the script to /dev/null and then used a complete if statement to check for error code 3. If error code 3 is encountered, we print a message with echo.

There's more…

There are a lot more tests and comparisons we can do with the test program. They are all listed in the manual; that is, man 1 test.

If you are unfamiliar with Bash and shell scripting, there is a lot of useful information in the manual page, man 1 bash.

The opposite of && is || and is pronounced "or." So, the opposite of what we did in this recipe would be as follows:

$> ./exist.sh / || echo "That doesn't exist"
File or directory exists
$> ./exist.sh /asdf || echo "That doesn't exist"
File or directory does not exist
That doesn't exist

See also

If you want to dig deep into the world of Bash and shell scripting, there is an excellent guide at The Linux Documentation Project: https://tldp.org/LDP/Bash-Beginners-Guide/html/index.html.

Exiting a program with a relevant return value

In this recipe, we'll learn how to exit a C program with a relevant return value. We will look at two different ways to exit a program with a return value and how return fits together with the system from a broader perspective. We will also learn what some common return values mean.

Getting ready

For this recipe, we only need the GCC compiler and the Make tool.

How to do it…

We will write two different versions of a program here to show you two different methods of exiting. Let's get started:

  1. We'll start by writing the first version using return, which we have seen previously. But this time, we will use it to return from functions, all the way back to main() and eventually the parent process, which is the shell. Save the following program in a file called functions_ver1.c. Pay attention to the return statements in the following code:
    #include <stdio.h>
    int func1(void);
    int func2(void);
    int main(int argc, char *argv[])
    {
       printf("Inside main\n");
       printf("Calling function one\n");
       if (func1())
       {
          printf("Everything ok from function one\n");
          printf("Return with 0 from main - all ok\n");
          return 0;
       }
       else
       {
          printf("Caught an error from function one\n");
          printf("Return with 1 from main - error\n");
          return 1;
       }
       return 0; /* We shouldn't reach this, but 
                    just in case */
    }
    int func1(void)
    {
       printf("Inside function one\n");
       printf("Calling function two\n");
       if (func2())
       {
          printf("Everything ok from function two\n");
          return 1;
       }
       else
       {
          printf("Caught an error from function two\n");
          return 0;
       }
    }
    int func2(void)
    {
       printf("Inside function two\n");
       printf("Returning with 0 (error) from "
          "function two\n");
       return 0;
    }
  2. Now, compile it:
    $> gcc functions_ver1.c -o functions_ver1
  3. Then, run it. Try to follow along and see which functions call and return to which other functions:
    $> ./functions_ver1
    Inside main 
    Calling function one 
    Inside function one 
    Calling function two 
    Inside function two 
    Returning with 0 (error) from function two 
    Caught an error from function two 
    Caught an error from function one 
    Return with 1 from main - error
  4. Check the return value:
    $> echo $?
    1
  5. Now, we rewrite the preceding program to use exit() inside the functions instead. What will happen then is that as soon as exit() is called, the program will exit with the specified value. If exit() is called inside another function, that function will not return to main() first. Save the following program in a new file as functions_ver2.c. Pay attention to the return and exit statements in the following code:
    #include <stdio.h>
    #include <stdlib.h>
    int func1(void);
    int func2(void);
    int main(int argc, char *argv[])
    {
       printf("Inside main\n");
       printf("Calling function one\n");
       if (func1())
       {
          printf("Everything ok from function one\n");
          printf("Return with 0 from main - all ok\n");
          return 0;
       }
       else
       {
       printf("Caught an error from function one\n");
          printf("Return with 1 from main - error\n");
          return 1;
       }
       return 0; /* We shouldn't reach this, but just 
                    in case */
    }
    int func1(void)
    {
       printf("Inside function one\n");
       printf("Calling function two\n");
       if (func2())
       {
          printf("Everything ok from function two\n");
          exit(0);
       }
       else
       {
          printf("Caught an error from function two\n");
          exit(1);
       }
    }
  6. Now, compile this version:
    $> gcc functions_ver2.c -o functions_ver2
  7. Then, run it and see what happens (and compare the output from the previous program):
    $> ./functions_ver2
    Inside main
    Calling function one
    Inside function one
    Calling function two
    Inside function two
    Returning with 0 (error) from function two
  8. Finally, check the return value:
    $> echo $?
    1

How it works…

Notice that in C, 0 is regarded as false (an error), while anything else is considered true. This is the opposite of the return values to the shell, which can be a bit confusing at first: as far as the shell is concerned, 0 means "all ok," while anything else indicates an error.

The difference between the two versions is in how the functions and the entire program return. In the first version, each function returns to the calling function—in the order they were called. In the second version, each function exits with the exit() function. This means that the program exits directly, returning the specified value to the shell. The second version isn't good practice; it's much better to return to the calling function. If someone else were to use your function in another program, and it suddenly exited the entire program, that would be a big surprise. That's not usually how we do it. However, I wanted to demonstrate the difference between exit() and return here.

I also wanted to demonstrate another point. Just as a function returns to its calling function with return, a program returns to its parent process (usually the shell) in the same way. So, in a way, programs in Linux are treated as functions in a program.

The following diagram shows how Bash calls the program (the upper arrow), which then starts in main(), which then calls the next function (the arrows to the right), and so on. The arrows returning on the left show how each function returns to the calling function, and then finally to Bash:

Figure 2.1 – Calling and returning

There's more…

There are a lot more return codes we can use. The most common ones are the ones we've seen here; 0 for ok and 1 for error. However, all other codes except 0 mean some form of error. Code 1 is a general error, while the other error codes are more specific. There isn't exactly a standard, but there are some commonly used codes. Some of the most common codes are as follows:

Figure 2.2 – Common error codes in Linux and other UNIX-like systems

In addition to these codes, there are some more listed at the end of /usr/include/sysexits.h. The codes listed in that file range from 64 to 78 and address errors such as data format errors, service unavailability, I/O errors, and more.

Redirecting stdin, stdout, and stderr

In this recipe, we will learn how to redirect standard input, standard output, and standard error to and from files. Redirecting data to and from files is one of the basic principles of Linux and other Unix systems.

stdin is the shorthand word for standard input. stdout and stderr are the shorthand words for standard output and standard error, respectively.

Getting ready

It's best if we use the Bash shell for this recipe for compatibility purposes.

How to do it…

To get the hang of redirections, we will be performing a bunch of experiments here. We are really going to twist and turn the redirections and see stdout, stderr, and stdin operate in all kinds of ways. Let's get started:

  1. Let's start by saving a list of the files and directories in the top root directory. We can do this by redirecting standard output (stdout) from the ls command into a file:
    $> cd
    $> ls / > root-directory.txt
  2. Now, take a look at the file with cat:
    $> cat root-directory.txt
  3. Now, let's try the wc command to count lines, words, and characters. Remember to press Ctrl + D when you have finished typing in the message:
    $> wc
    hello,
    how are you?
    Ctrl+D
         2       4      20
  4. Now that we know how wc works, we can redirect its input to come from a file instead—the file we created with the file listing:
    $> wc < root-directory.txt
    29  29 177
  5. What about standard error? Standard error is its own output stream, separated from standard output. If we redirect standard output and generate an error, we will still see the error message on the screen. Let's try it out:
    $> ls /asdfasdf > non-existent.txt
    ls: cannot access '/asdfasdf': No such file or directory
  6. Just like standard output, we can redirect standard error. Notice that we don't get any error message here:
    $> ls /asdfasdf 2> errors.txt
  7. The error messages are saved in errors.txt:
    $> cat errors.txt
    ls: cannot access '/asdfasdf': No such file or directory
  8. We can even redirect standard output and standard error at the same time, to different files:
    $> ls /asdfasdf > root-directory.txt 2> errors.txt
  9. We can also redirect standard output and error into the same file for convenience:
    $> ls /asdfasdf &> all-output.txt
  10. We can even redirect all three (stdin, stdout, and stderr) at the same time:
    $> wc < all-output.txt > wc-output.txt 2> \
    > wc-errors.txt
  11. We can also write to standard error from the shell to write error messages of our own:
    $> echo hello > /dev/stderr
    hello
  12. Another way of printing a message to stderr from Bash is like this:
    $> echo hello 1>&2
    hello
  13. However, this doesn't prove that our hello message got printed to standard error. We can prove this by redirecting the standard output to a file. If we still see the error message, then it's printed on standard error. When we do this, we need to wrap the first statement in parenthesis to separate it from the last redirect:
    $> (echo hello > /dev/stderr) > hello.txt
    hello
    $> (echo hello 1>&2) > hello.txt
    hello
  14. Stdin, stdout, and stderr are represented by files in the /dev directory. This means we can even redirect stdin from a file. This experiment doesn't do anything useful—we could have just typed wc, but it proves a point:
    $> wc < /dev/stdin
    hello, world!
    Ctrl+D
         1       2      14
  15. All of this means that we can even redirect a standard error message back to standard output:
    $> (ls /asdfasdf 2> /dev/stdout) > \ 
    > error-msg-from-stdout.txt
    $> cat error-msg-from-stdout.txt 
    ls: cannot access '/asdfasdf': No such file or directory

How it works…

Standard output, or stdout, is where all the normal output from programs gets printed. Stdout is also referred to as file descriptor 1.

Standard error, or stderr, is where all error messages get printed. Stderr is also referred to as file descriptor 2. That is why we used 2> when we redirected stderr to a file. If we wanted to, for clarity, we could have redirected stdout as 1> instead of just >. But the default redirection with > is stdout, so there is no need to do this.

When we redirected both stdout and stderr in Step 9, we used an & sign. This reads as "stdout and stderr".

Standard input, or stdin, is where all input data is read from. Stdin is also referred to as file descriptor 0. Stdin redirects with a <, but just as with stdout and stderr, we can also write it as 0<.

The reason for separating the two outputs, stdout and stderr, is so that when we redirect the output from a program to a file, we should still be able to see the error message on the screen. We also don't want the file to be cluttered with error messages.

Having separate outputs also makes it possible to have one file for the actual output, and another one as a log file for error messages. This is especially handy in scripts.

You might have heard the phrase "Everything in Linux is either a file or a process." That saying is true: there is nothing in Linux besides files and processes. Our experiments with /dev/stdout, /dev/stderr, and /dev/stdin proved this. Even the input and output of programs are represented by files.

In Step 11, we redirected the output to the /dev/stderr file, which is standard error. The message, therefore, got printed on standard error.

In Step 12, we pretty much did the same thing but without using the actual device file. The funny-looking 1>&2 redirection reads as "send standard output to standard error".

There's more…

Instead of using /dev/stderr, for example, we could have used /dev/fd/2, where fd stands for file descriptor. The same goes for stdout, which is /dev/fd/1, and stdin, which is /dev/fd/0. So, for example, the following will print the list to stderr:

$> ls / > /dev/fd/2

Just like we can send standard output to standard error with 1>&2, we can do the opposite with 2>&1, which means we can send standard error to standard output.

Connecting programs using pipes

In this recipe, we'll learn how to use pipes to connect programs. When we write our C programs, we always want to strive to make them easy to pipe together with other programs. That way, our programs will be much more useful. Sometimes, programs that are connected with pipes are called filters. The reason for this is that, often, when we connect programs with pipes, it is to filter or transform some data.

Getting ready

Just as in the previous recipe, it's recommended that we use the Bash shell.

How to do it…

Follow these steps to explore pipes in Linux:

  1. We are already familiar with wc and ls from the previous recipe. Here, we will use them together with a pipe to count the number of files and directories in the root directory of the system. The pipe is the vertical line symbol:
    $> ls / | wc -l
    29
  2. Let's make things a bit more interesting. This time, we want to list only symbolic links in the root directory (by using two programs with a pipe). The result will differ from system to system:
    $> ls -l / | grep lrwx
    lrwxrwxrwx   1 root root    31 okt 21 06:53 initrd.img -> boot/initrd.img-4.19.0-12-amd64
    lrwxrwxrwx   1 root root    31 okt 21 06:53 initrd.img.old -> boot/initrd.img-4.19.0-11-amd64
    lrwxrwxrwx   1 root root    28 okt 21 06:53 vmlinuz -> boot/vmlinuz-4.19.0-12-amd64
    lrwxrwxrwx   1 root root    28 okt 21 06:53 vmlinuz.old -> boot/vmlinuz-4.19.0-11-amd64
  3. Now, we only want the actual filenames, not the information about them. So, this time, we will add another program at the end called awk. In this example, we are telling awk to print the ninth field. One or more whitespaces separate each field:
    $> ls -l / | grep lrwx | awk '{ print $9 }'
    initrd.img
    initrd.img.old
    vmlinuz
    vmlinuz.old
  4. We can add another "filter", one that adds some text in front of every link. This can be accomplished using sed, the stream editor. The s in the sed expression means substitute. We tell sed that we want to substitute the start of the line (^) with the text This is a link::
    $> ls -l / | grep lrwx | awk '{ print $9 }' \
    > | sed 's/^/This is a link: /'
    This is a link: initrd.img
    This is a link: initrd.img.old
    This is a link: vmlinuz
    This is a link: vmlinuz.old

How it works…

A lot of things are going on here, but don't feel discouraged if you don't get it all. The importance of this recipe is to demonstrate how to use a pipe (the vertical line symbol, |).

In the very first step, we counted the number of files and directories in the root of the filesystem using wc. When we run ls interactively, we get a nice-looking list that spans the width of our terminal. The output is also most likely color-coded. But when we run ls by redirecting its output through a pipe, ls doesn't have a real terminal to output to, so it falls back to outputting the text one file or directory per line, without any colors. You can try this yourself if you like by running the following:

$> ls / | cat

Since ls is outputting one file or directory per line, we can count the number of lines with wc (using the -l option).

In the next step (Step 2), we used grep to list only the links from the output of ls -l. Links in the output of ls -l start with the letter l at the beginning of the line. After that come the access rights, which for links are rwx for everyone. That is why we searched for the string lrwx with grep.

Then, we only wanted the actual filenames, so we added a program called awk. The awk tool lets us single out a particular column or field in the output. We singled out the ninth column ($9), which is the filename.

By running the output from ls through two other tools, we created a list of only the links in the root directory.

In Step 4, we added another tool, or filter, as it is sometimes called. This tool is sed, a stream editor. With this program, we can make changes to the text. In this case, we added the text This is a link: in front of every link. The following is a short explanation of the line:

sed 's/^/This is a link: /'

s means "substitute"; that is, we wish to modify some text. Between the first two slashes (/) is the text or expression that should match what we want to modify. Here, that is the beginning of the line, ^. Then, after the second slash, up until the final slash, comes the text we want to replace the matched text with. Here, that is the text This is a link:.

There's more…

Beware of unnecessary piping; it's easy to get caught up in endless piping. One silly—but instructive—example is this:

$> ls / | cat | grep tmp
tmp

We could leave out cat and still get the same result:

$> ls / | grep tmp
tmp

The same goes for this one (which I am guilty of myself from time to time):

$> cat /etc/passwd | grep root
root:x:0:0:root:/root:/bin/bash

There is no reason to pipe the previous example at all. The grep utility can take a filename argument, like so:

$> grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash

See also

For anyone interested in the history of Unix and how far back pipes go, there is an exciting video from 1982 on YouTube, uploaded by AT&T: https://www.youtube.com/watch?v=tc4ROCJYbm0.

Writing to stdout and stderr

In this recipe, we'll learn how to print text to both stdout and stderr in a C program. In the two previous recipes, we learned what stdout and stderr are, why they exist, and how to redirect them. Now, it's our turn to write correct programs that output error messages on standard error, and regular messages on standard output.

How to do it…

Follow these steps to learn how to write output to both stdout and stderr in a C program:

  1. Write the following code in a file called output.c and save it. In this program, we will write output using three different functions: printf(), fprintf(), and dprintf(). With fprintf(), we can specify a file stream such as stdout or stderr, while with dprintf(), we can specify the file descriptor (1 for stdout and 2 for stderr, just as we have seen previously):
    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    int main(void)
    {
       printf("A regular message on stdout\n");
       /* Using streams with fprintf() */
       fprintf(stdout, "Also a regular message on "
          "stdout\n");
       fprintf(stderr, "An error message on stderr\n");
       /* Using file descriptors with dprintf().
        * This requires _POSIX_C_SOURCE 200809L
        * (man 3 dprintf) */
       dprintf(1, "A regular message, printed to "
          "fd 1\n");
       dprintf(2, "An error message, printed to "
          "fd 2\n");
       return 0;
    }
  2. Compile the program:
    $> gcc output.c -o output
  3. Run the program like you usually would:
    $> ./output 
    A regular message on stdout
    Also a regular message on stdout
    An error message on stderr
    A regular message, printed to fd 1
    An error message, printed to fd 2
  4. To prove that the regular messages are printed to stdout, we can send the error messages to /dev/null, a black hole in the Linux system. Doing this will only display the messages printed to stdout:
    $> ./output 2> /dev/null 
    A regular message on stdout
    Also a regular message on stdout
    A regular message, printed to fd 1
  5. Now, we will do the reverse; we will send the messages printed to stdout to /dev/null, showing only the error messages that are printed to stderr:
    $> ./output > /dev/null
    An error message on stderr
    An error message, printed to fd 2
  6. Finally, let's send all messages, from both stdout and stderr, to /dev/null. This will display nothing:
    $> ./output &> /dev/null

How it works…

The first example, where we used printf(), doesn't contain anything new or unique. All output printed with the regular printf() function is printed to stdout.

Then, we saw some new examples, including the two lines where we used fprintf(). That function, fprintf(), allows us to specify a file stream to print the text to. We will cover what a stream is later on in this book. But in short, a file stream is what we usually open when we want to read or write to a file in C using the standard library. And remember, everything is either a file or a process in Linux. When a program starts in Linux, three file streams are automatically opened—stdin, stdout, and stderr (assuming the program has included stdio.h).

Then, we looked at some examples of using dprintf(). This function allows us to specify a file descriptor to print to. We covered file descriptors in the previous recipes of this chapter, but we will discuss them in more depth later in this book. Three file descriptors are always open—0 (stdin), 1 (stdout), and 2 (stderr)—in every program we write on Linux. Here, we printed the regular message to file descriptor (fd for short) 1, and the error message to file descriptor 2.

To be correct in our code, we need to include the very first line (the #define line) for the sake of dprintf(). We can read all about it in the manual page (man 3 dprintf), under Feature Test Macro Requirements. The macro we define, _POSIX_C_SOURCE, is for POSIX standards and compatibility. We will cover this in more depth later in this book.

When we tested the program, we verified that the regular messages got printed to standard output by redirecting the error messages to a file called /dev/null, showing only the messages printed to standard output. Then, we did the reverse to verify that the error messages got printed to standard error.

The special file, /dev/null, acts as a black hole in Linux and other Unix systems. Everything we send to that file simply disappears. Try it out with ls / &> /dev/null, for example. No output will be displayed since everything is redirected to the black hole.

There's more…

I mentioned that three file streams are opened in a program, assuming it includes stdio.h, as well as three file descriptors. These three file descriptors are always opened, even if stdio.h is not included. If we were to include unistd.h, we could also use macro names for the three file descriptors.

The following table shows these file descriptors, their macro names, and file streams, which are handy for future reference:

File descriptor | Macro name (unistd.h) | File stream
0               | STDIN_FILENO          | stdin
1               | STDOUT_FILENO         | stdout
2               | STDERR_FILENO         | stderr

Figure 2.3 – File descriptors and file streams in Linux

Reading from stdin

In this recipe, we'll learn how to write a program in C that reads from standard input. Doing so enables your programs to take input from other programs via a pipe, making them easier to use as a filter, thus making them more useful in the long run.

Getting ready

You'll need the GCC compiler and preferably the Bash shell for this recipe, although it should work with any shell.

To fully understand the program that we are about to write, you should look at an ASCII table, an example of which can be found at the following URL: https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/ascii-table.md.

How to do it…

In this recipe, we will write a program that takes single words as input, converts their cases (uppercase into lower and lowercase into upper), and prints the result to standard output. Let's get started:

  1. Write the following code into a file and save it as case-changer.c. In this program, we use fgets() to read characters from stdin. We then use a for loop to loop over the input, character by character. Before we start the next loop with the next line of input, we must zero out the arrays using memset():
    #include <stdio.h>
    #include <string.h>
    int main(void)
    {
        char c[20] = { 0 };
        char newcase[20] = { 0 };
        int i;
        while(fgets(c, sizeof(c), stdin) != NULL)
        {
            for(i=0; i<sizeof(c); i++)
            {
                /* Upper case to lower case */
                if ( (c[i] >= 65) && (c[i] <= 90) )
                {
                    newcase[i] = c[i] + 32;
                }
                /* Lower case to upper case */
                if ( (c[i] >= 97 && c[i] <= 122) )
                {
                    newcase[i] = c[i] - 32;
                }
            }
            printf("%s\n", newcase);
            /* zero out the arrays so there are no
               left-overs in the next run */
            memset(c, 0, sizeof(c));
            memset(newcase, 0, sizeof(newcase));
        }
        return 0;
    }
  2. Compile the program:
    $> gcc case-changer.c -o case-changer
  3. Try it out by typing some words in it. Quit the program by pressing Ctrl + D:
    $> ./case-changer
    hello
    HELLO
    AbCdEf
    aBcDeF
  4. Now, try to pipe some input to it, for example, the first five lines from ls:
    $> ls / | head -n 5 | ./case-changer
    BIN
    BOOT
    DEV
    ETC
    HOME
  5. Let's try to pipe some uppercase words into it from a manual page:
    $> man ls | egrep '^[A-Z]+$' | ./case-changer 
    name
    synopsis
    description
    author
    copyright

How it works…

First, we created two character arrays of 20 bytes each and initialized them to zero.

Then, we used fgets(), wrapped in a while loop, to read characters from standard input. The fgets() function reads characters until it reaches a newline character or end-of-file (EOF). The characters that are read are stored in the c array, and a pointer to that array is returned.

To read more input—that is, more than one word—we continue reading input with the help of the while loop. The while loop won't finish until we either press Ctrl + D or the input stream is empty.

The fgets() function returns a pointer to the string on success, and NULL on error or when EOF occurs before any characters have been read (that is, there is no more input). Let's break down the fgets() function so that we can understand it better:

fgets(c, sizeof(c), stdin)

The first argument, c, is where we store the data. In this case, it's our character array.

The second argument, sizeof(c), is the maximum size we want to read. The fgets() function is safe here; it reads at most one character less than the size we specify, leaving room for the terminating null character. In our case, it will read at most 19 characters.

The third and final argument, stdin, is the stream we want to read from—in our case, standard input.

Inside the while loop is where the case conversions happen, character by character, in the for loop. In the first if statement, we check if the current character is an uppercase letter. If it is, we add 32 to the character. For example, if the character is A, it's represented by 65 in the ASCII table. When we add 32, we get 97, which is a. The same goes for the entire alphabet; the uppercase and lowercase versions of a letter are always 32 positions apart.

The next if statement does the reverse. If the character is a lowercase one, we subtract 32 and get the uppercase version.

Since we are only checking characters between 65 and 90, and 97 and 122, all other characters are ignored.

Once the result has been printed on the screen, we reset the character arrays to all zeros with memset(). If we didn't, there would be leftover characters in the next run.

Using the program

We tried the program by running it interactively and typing words into it. Each time we hit the Enter key, the word is transformed; the uppercase letters will become lowercase and vice versa.

Then, we piped data to it from the ls command. That output got converted into uppercase letters.

Then, we piped uppercase words to it from a manual page (the headings). All the headings in a manual page are uppercase and start at the beginning of the line. That is what we "grep" for with egrep before piping the result to our case-changer program.

There's more…

For more information about fgets(), see the manual page, man 3 fgets.

You can write a small program that prints a minimal ASCII table for the letters A-Z and a-z. This small program also demonstrates that each character is represented by a number:

ascii-table.c

#include <stdio.h>
int main(void)
{
    char c;
    for (c = 65; c<=90; c++)
    {
        printf("%c = %d    ", c, c); /* upper case */
        printf("%c = %d\n", c+32, c+32); /* lower case */
    }
    return 0;
}

Writing a pipe-friendly program

In this recipe, we will learn how to write a program that is pipe-friendly. It will take input from standard input and output the result on standard output. Any error messages are going to be printed on standard error.

Getting ready

We'll need the GCC compiler, GNU Make, and preferably the Bash shell for this recipe.

How to do it…

In this recipe, we are going to write a program that converts miles per hour into kilometers per hour. As a test, we are going to pipe data to it from a text file that contains measurements from a car trial run with average speeds. The text file is in miles per hour (mph), but we want them in kilometers per hour (kph) instead. Let's get started:

  1. Start by creating the following text file or download it from GitHub from https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/avg.txt. If you are creating it yourself, name it avg.txt. This text will be used as the input for a program we will write. The text simulates measurement values from a car trial run:
    10-minute average: 61 mph
    30-minute average: 55 mph
    45-minute average: 54 mph
    60-minute average: 52 mph
    90-minute average: 52 mph
    99-minute average: nn mph
  2. Now, create the actual program. Type in the following code and save it as mph-to-kph.c, or download it from GitHub from https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/mph-to-kph.c. This program will convert miles per hour into kilometers per hour. This conversion is performed in the printf() statement:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    int main(void)
    {
        char mph[10] = { 0 };
        while(fgets(mph, sizeof(mph), stdin) != NULL)
        {
            /* Check if mph is numeric 
             * (and do conversion) */
            if( strspn(mph, "0123456789.-\n") == 
                strlen(mph) )
            {
                printf("%.1f\n", (atof(mph)*1.60934) );
            }
            /* If mph is NOT numeric, print error 
             * and return */
            else
            {
                fprintf(stderr, "Found non-numeric" 
                    " value\n");
                return 1;
            }
        }
        return 0;
    }
  3. Compile the program:
    $> gcc mph-to-kph.c -o mph-to-kph
  4. Test the program by running it interactively. Type in some miles per hour values and hit Enter after each value. The program will print out the corresponding value in kilometers per hour:
    $> ./mph-to-kph 
    50
    80.5
    60
    96.6
    100
    160.9
    hello
    Found non-numeric value
    $> echo $?
    1
    $> ./mph-to-kph
    50
    80.5
    Ctrl+D
    $> echo $?
    0
  5. Now, it's time to use our program as a filter to transform the table containing miles per hour into kilometers per hour. But first, we must filter out only the mph values. We can do this with awk:
    $> cat avg.txt | awk '{ print $3 }'
    61
    55
    54
    52
    52
    nn
  6. Now that we have a list of the numbers only, we can add our mph-to-kph program at the end to convert the values:
    $> cat avg.txt | awk '{ print $3 }' | ./mph-to-kph 
    98.2
    88.5
    86.9
    83.7
    83.7
    Found non-numeric value
  7. Since the last value is nn, a non-numeric value, which is an error in the measurement, we don't want to show the error message in the output. Therefore, we redirect stderr to /dev/null. Note the parentheses around the expression, before the redirection:
    $> (cat avg.txt | awk '{ print $3 }' | \ 
    > ./mph-to-kph) 2> /dev/null
    98.2
    88.5
    86.9
    83.7
    83.7
  8. This is much prettier! However, we also want to add km/h at the end of every line to know what the value is. We can use sed to accomplish this:
    $> (cat avg.txt | awk '{ print $3 }' | \ 
    > ./mph-to-kph) 2> /dev/null | sed 's/$/ km\/h/'
    98.2 km/h
    88.5 km/h
    86.9 km/h
    83.7 km/h
    83.7 km/h

How it works…

This program is similar to the one from the previous recipe. The feature we added here checks whether the input data is numeric. If it isn't, the program aborts with an error message printed to stderr. Regular output is still printed to stdout until an error occurs.

The program is only printing the numeric values, no other information. This makes it better as a filter, since the km/h text can be added by the user with other programs. That way, the program can be useful for many more scenarios that we haven't thought about.

The line where we check for numeric input might require some explanation:

if( strspn(mph, "0123456789.-\n") == strlen(mph) )

The strspn() function returns the length of the initial part of the string that consists only of the characters we specified in the second argument. We can then compare that length with the entire length of the string, which we get with strlen(). If they match, we know that every character is a digit, a dot, a minus sign, or a newline. If they don't match, an illegal character was found in the string.

For strspn() and strlen() to work, we included string.h. For atof() to work, we included stdlib.h.

Piping data to the program

In Step 5, we selected only the third field—the mph value—using the awk program. The awk $3 variable means field number 3. Each field is a new word, separated by a space.

In Step 6, we redirected the output from the awk program—the mph values—into our mph-to-kph program. As a result, our program printed the km/h values on the screen.

In Step 7, we redirected the error messages to /dev/null so that the output from the program is clean.

Finally, in Step 8, we added the text km/h after the kph values in the output. We did this by using the sed program. The sed program can look a bit cryptic, so let's break it down:

sed 's/$/ km\/h/'

This sed script is similar to the previous ones we have seen. But this time, we matched the end of the line, denoted by a $ sign, instead of the beginning, denoted by ^. In other words, we substituted the (empty) end of each line with the text km/h, appending it to every line. Note, though, that we needed to escape the slash in km/h with a backslash.

There's more…

There's a lot of useful information about strlen() and strspn() in the respective manual pages. You can read them with man 3 strlen and man 3 strspn.

Redirecting the result to a file

In this recipe, we will learn how to redirect the output of a program to two different files. We are also going to learn some best practices when writing a filter, a program specifically made to be connected with other programs with a pipe.

The program we will build in this recipe is a new version of the program from the previous recipe. The mph-to-kph program in the previous recipe had one drawback: it always stopped when it found a non-numeric character. Often, when we run filters on long input data, we want the program to continue running, even if it has detected some erroneous data. This is what we are going to fix in this version.

We will keep the default behavior just as it was previously; that is, the program will abort when it encounters a non-numeric value. However, we will add an option (-c) so that it can continue running even if a non-numeric value is detected. Then, it's up to the end users to decide how they want to run it.

Getting ready

All the requirements listed in the Technical requirements section of this chapter apply here (the GCC compiler, the Make tool, and the Bash shell).

How to do it…

This program will be a bit longer, but if you like, you can download it from GitHub at https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/mph-to-kph_v2.c. Since the code is a bit longer, I will be splitting it up into several steps. However, all of the code still goes into a single file called mph-to-kph_v2.c. Let's get started:

  1. Let's start with the feature macro and the required header files. Since we are going to use getopt(), we need the _XOPEN_SOURCE macro, as well as the unistd.h header file:
    #define _XOPEN_SOURCE 500
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
  2. Next, we will add the function prototype for the help function. We will also start writing the main() function body:
    void printHelp(FILE *stream, char progname[]);
    int main(int argc, char *argv[])
    {
       char mph[10] = { 0 };
       int opt;
       int cont = 0; 
  3. Then, we will add the getopt() function inside a while loop. This is similar to the Writing a program that parses command-line options recipe from Chapter 1, Getting the Necessary Tools and Writing Our First Linux Programs:
    /* Parse command-line options */    
       while ((opt = getopt(argc, argv, "ch")) != -1)
       {
          switch(opt)
          {
             case 'h':
                printHelp(stdout, argv[0]);
                return 0;
             case 'c':
                cont = 1;
                break;
             default:
                printHelp(stderr, argv[0]);
                return 1;
          }
       }
  4. Then, we must create another while loop, where we will fetch data from stdin with fgets():
    while(fgets(mph, sizeof(mph), stdin) != NULL)
       {
          /* Check if mph is numeric 
           * (and do conversion) */
          if( strspn(mph, "0123456789.-\n") == 
                strlen(mph) )
          {
             printf("%.1f\n", (atof(mph)*1.60934) );
          }
          /* If mph is NOT numeric, print error 
           * and return */
          else
          {
             fprintf(stderr, "Found non-numeric " 
                "value\n");
             if (cont == 1) /* Check if -c is set */
             {
                continue; /* Skip and continue if 
                           * -c is set */
             }
             else
             {
                return 1; /* Abort if -c is not set */
             }
          }
       }
       return 0;
    }
  5. Finally, we must write the function body for the help function:
    void printHelp(FILE *stream, char progname[])
    {
       fprintf(stream, "%s [-c] [-h]\n", progname);
       fprintf(stream, " -c continues even though a non" 
          "-numeric value was detected in the input\n"
          " -h print help\n");
    } 
  6. Compile the program using Make:
    $> make mph-to-kph_v2
    cc     mph-to-kph_v2.c   -o mph-to-kph_v2
  7. Let's try it out, without any options, by giving it some numeric values and a non-numeric value. The result should be the same as what we received previously:
    $> ./mph-to-kph_v2 
    60
    96.6
    40
    64.4
    hello
    Found non-numeric value
  8. Now, let's try it out using the -c option so that we can continue running the program even though a non-numeric value has been detected. Type some numeric and non-numeric values into the program:
    $> ./mph-to-kph_v2 -c
    50
    80.5
    90
    144.8
    hello
    Found non-numeric value
    10
    16.1
    20
    32.2
  9. That worked just fine! Now, let's add some more data to the avg.txt file and save it as avg-with-garbage.txt. This time, there will be more lines with non-numeric values. You can also download the file from https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/avg-with-garbage.txt:
    10-minute average: 61 mph
    30-minute average: 55 mph
    45-minute average: 54 mph
    60-minute average: 52 mph
    90-minute average: 52 mph
    99-minute average: nn mph
    120-minute average: 49 mph
    160-minute average: 47 mph
    180-minute average: nn mph
    error reading data from interface
    200-minute average: 43 mph
  10. Now, let's run awk on that file again to see only the values:
    $> cat avg-with-garbage.txt | awk '{ print $3 }'
    61
    55
    54
    52
    52
    nn
    49
    47
    nn
    data
    43
  11. Now comes the moment of truth. Let's add the mph-to-kph_v2 program at the end with the -c option. This should convert all the mph values into kph values and continue running, even though non-numeric values will be found:
    $> cat avg-with-garbage.txt | awk '{ print $3 }' \
    > | ./mph-to-kph_v2 -c
    98.2
    88.5
    86.9
    83.7
    83.7
    Found non-numeric value
    78.9
    75.6
    Found non-numeric value
    Found non-numeric value
    69.2
  12. That worked! The program continued, even though there were non-numeric values. Since the error messages are printed to stderr and the values are printed to stdout, we can redirect the output to two different files. That leaves us with a clean output file and a separate error file:
    $> (cat avg-with-garbage.txt | awk '{ print $3 }' \
    > | ./mph-to-kph_v2 -c) 2> errors.txt 1> output.txt
  13. Let's take a look at the two files:
    $> cat output.txt 
    98.2
    88.5
    86.9
    83.7
    83.7
    78.9
    75.6
    69.2
    $> cat errors.txt 
    Found non-numeric value
    Found non-numeric value
    Found non-numeric value

How it works…

The code itself is similar to what we had in the previous recipe, except for the added getopt() and the help function. We covered getopt() in detail in Chapter 1, Getting the Necessary Tools and Writing Our First Linux Programs, so there's no need to cover it again here.

To continue reading data from stdin when a non-numeric value is found (while using the -c option), we use continue to skip one iteration of the loop. Instead of aborting the program, we print an error message to stderr and then move on to the next iteration, leaving the program running.

Also, note that we passed two arguments to the printHelp() function. The first argument is a FILE pointer. We use this to pass stderr or stdout to the function. Stdout and stderr are streams, which can be reached via their FILE pointer. This way, we can choose if the help message should be printed to stdout (in case the user asked for the help) or to stderr (in case there was an error).

The second argument is the name of the program, as we have seen already.

We then compiled and tested the program. Without the -c option, it works just as it did previously.

After that, we tried the program with data from a file that contains some garbage. That's usually how data looks; it's often not "perfect". That's why we added the option to continue, even though non-numeric values were found.

Just like in the previous recipe, we used awk to select only the third field (print $3) from the file.

The exciting part is Step 12, where we redirected both stderr and stdout. We separated the two outputs into two different files. That way, we have a clean output file with only the km/h values. We can then use that file for further processing since it doesn't contain any error messages.

We could have written the program to do all the steps for us, such as filter out the values from the text file, do the conversions, and then write the result to a new file. But that's an anti-pattern in Linux and Unix. Instead, we want to write small tools that do one thing only—and do it well. That way, the program can be used on other files with a different structure, or for a completely different purpose. We could even grab the data straight from a device or modem if we wanted to and pipe it into our program. The tools for extracting the correct fields from the file (or device) have already been created; there's no need to reinvent the wheel.

Notice that we needed to enclose the entire command, with pipes and all, in parentheses before redirecting the output and error messages.

There's more…

Eric S. Raymond has written some excellent rules to stick to when developing software for Linux and Unix. They can all be found in his book, The Art of Unix Programming. Two of the rules that apply to us in this recipe include the Rule of Modularity, which says that we should write simple parts that are connected with clean interfaces. The other rule that applies to us is the Rule of Composition, which says to write programs that will be connected to other programs.

His book is available for free online at http://www.catb.org/~esr/writings/taoup/html/.

Reading environment variables

Another way to communicate with the shell—and to configure a program—is via environment variables. By default, there are a lot of environment variables already set. These variables contain information on just about anything regarding your user and your settings. Some examples include the username, which type of terminal you are using, the path variable we discussed in previous recipes, your preferred editor, your preferred locale and language, and more.

Knowing how to read these variables will make it much easier for you to adapt your programs to the user's environment.

In this recipe, we will write a program that reads environment variables, adapts its output, and prints some information about the user and the session.

Getting ready

For this recipe, we can use just about any shell. Other than a shell, we'll need the GCC compiler.

How to do it…

Follow these steps to write a program that reads environment variables:

  1. Save the following code into a file called env-var.c. You can also download the whole program from https://github.com/PacktPublishing/Linux-System-Programming-Techniques/blob/master/ch2/env-var.c. This program will read some common environment variables from your shell using the getenv() function. The strange-looking number sequences (\033[0;31) are used to color the output:
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    int main(void)
    {
       /* Using getenv() to fetch env. variables */
       printf("Your username is %s\n", getenv("USER"));
       printf("Your home directory is %s\n", 
          getenv("HOME"));
       printf("Your preferred editor is %s\n", 
          getenv("EDITOR"));
       printf("Your shell is %s\n", getenv("SHELL"));
       /* Check if the current terminal supports colors.
        * getenv() returns NULL if the variable is unset,
        * so check that before calling strstr() */
       if ( getenv("TERM") != NULL
            && strstr(getenv("TERM"), "256color") )
       {
          /* Color the output with \033 + colorcode */
          printf("\033[0;31mYour \033[0;32mterminal "
             "\033[0;35msupports "
             "\033[0;33mcolors\033[0m\n");
       }
       else
       {
          printf("Your terminal doesn't support" 
             " colors\n");
       }
       return 0;
    }
  2. Compile the program using GCC:
    $> gcc env-var.c -o env-var
  3. Run the program. The information that will be printed for you will differ from mine. The last line will also be in color if your terminal supports it. If it doesn't, it will tell you that your terminal doesn't support colors:
    $> ./env-var 
    Your username is jake
    Your home directory is /home/jake
    Your preferred editor is vim
    Your shell is /bin/bash
    Your terminal supports colors
  4. Let's investigate the environment variables we used by using echo. Make a note of the $TERM variable. The dollar sign ($) tells the shell that we want to print the TERM variable, not the word TERM:
    $> echo $USER
    jake
    $> echo $HOME
    /home/jake
    $> echo $EDITOR
    vim
    $> echo $SHELL
    /bin/bash
    $> echo $TERM
    screen-256color
  5. If we were to change the $TERM variable to a regular xterm, without color support, we would get a different output from the program:
    $> export TERM=xterm
    $> ./env-var 
    Your username is jake
    Your home directory is /home/jake
    Your preferred editor is vim
    Your shell is /bin/bash
    Your terminal doesn't support colors
  6. Before moving on, we should reset our terminal to the value it was before we changed it. This will probably be something else on your computer:
    $> export TERM=screen-256color
  7. It's also possible to set an environment variable temporarily for the duration of the program. We can do this by setting the variable and executing the program on the same line. Notice that when the program ends, the variable is still the same as it was previously. We just override the variable when the program executes:
    $> echo $TERM
    xterm-256color
    $> TERM=xterm ./env-var
    Your username is jake
    Your home directory is /home/jake
    Your preferred editor is vim
    Your shell is /bin/bash
    Your terminal doesn't support colors
    $> echo $TERM
    xterm-256color
  8. We can also print a complete list of all the environment variables using the env command. The list will probably be several pages long. All of these variables can be accessed using the getenv() C function:
    $> env

How it works…

We use the getenv() function to get the values from the shell's environment variables. We print these variables to the screen.

Then, at the end of the program, we check if the current terminal has color support. This is usually denoted by something such as xterm-256color, screen-256color, and so on. We then use the strstr() function (from string.h) to check if the $TERM variable contains the 256color substring. If it does, the terminal has color support, and we print a colorized message on the screen. If it doesn't, however, we print that the terminal doesn't have color support, without using any colors.

All of these variables are the shell's environment variables and can be printed with the echo command; for example, echo $TERM. We can also set our own environment variables in the shell; for instance, export FULLNAME=Jack-Benny. Likewise, we can change existing ones by overwriting them, just as we did with the $TERM variable. We can also override them by setting them at runtime, like we did with TERM=xterm ./env-var.

Regular variables set with the FULLNAME=Jack-Benny syntax are only available to the current shell and are hence called local variables. When we set variables using the export command, they become global variables, more commonly called environment variables, which are available to both subshells and child processes.

There's more…

We can also change environment variables and create new ones in a C program by using the setenv() function. However, when we do so, those variables won't be available in the shell that started the program. The program we run is a child process of the shell, and hence it can't change the variables of the shell, its parent process. But any other programs started from inside our own program will be able to see those variables. We will discuss parent and child processes in more depth later in this book.

Here is a short example of how to use setenv(). The 1 in the third argument to setenv() means that we want to overwrite the variable if it already exists. If we change it to a 0, it prevents overwriting:

env-var-set.c

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    setenv("FULLNAME", "Jack-Benny", 1);
    printf("Your full name is %s\n", getenv("FULLNAME"));
    return 0;
}

If we compile and run the program and then try to read $FULLNAME from the shell, we'll notice that it doesn't exist:

$> gcc env-var-set.c -o env-var-set
$> ./env-var-set 
Your full name is Jack-Benny
$> echo $FULLNAME

  • Gain an overall understanding of how to debug your programs using Valgrind

Product Details

Publication date: May 07, 2021
Length: 432 pages
Edition: 1st
Language: English
ISBN-13: 9781789951288





Table of Contents

13 Chapters
Chapter 1: Getting the Necessary Tools and Writing Our First Linux Programs
Chapter 2: Making Your Programs Easy to Script
Chapter 3: Diving Deep into C in Linux
Chapter 4: Handling Errors in Your Programs
Chapter 5: Working with File I/O and Filesystem Operations
Chapter 6: Spawning Processes and Using Job Control
Chapter 7: Using systemd to Handle Your Daemons
Chapter 8: Creating Shared Libraries
Chapter 9: Terminal I/O and Changing Terminal Behavior
Chapter 10: Using Different Kinds of IPC
Chapter 11: Using Threads in Your Programs
Chapter 12: Debugging Your Programs
Other Books You May Enjoy

Customer reviews

Top Reviews

Rating distribution: 4.8 (8 Ratings)
5 star: 87.5%
4 star: 0%
3 star: 12.5%
2 star: 0%
1 star: 0%
Mouad, Mar 25, 2024 (5 stars)
Thank you so much artistic style to learn seamlessly
Subscriber review, Packt
April, Jun 02, 2021 (5 stars)
This is a well-written book that gives you everything you need to know about Linux programming. It provides practical examples and lays things out in an easy-to-learn manner. Highly recommended.
Amazon Verified review
Ramon Fried, Jun 13, 2021 (5 stars)
Full disclosure: I was the technical reviewer for the book. This is an excellent book for beginners and intermediate developers who would like to extend their knowledge in system programming. It gives practical examples and recipes around the most important concepts of system development: working with files, manipulating streams, dealing with errors, etc.
Amazon Verified review
Amazon Customer, May 25, 2021 (5 stars)
I was always very fond of Linux and Jack-Benny has done an excellent job in writing this book. It is a very in-depth resource for anyone who is interested in Linux system programming techniques.
Amazon Verified review
Donald A. Tevault, May 24, 2021 (5 stars)
This is a nicely written book that can be enjoyed by both beginner and advanced C programmers. It starts out by explaining the basics of installing and using C programming tools on a Linux system, and goes all the way through to more advanced topics like creating your own system daemons. Instead of bogging you down with long treatises about programming theory, the author demonstrates everything with working program code, and then explains what's going on with it. The explanations are clear, concise, and easy to read. If you want to get into C programming for Linux systems, you won't go wrong by picking up a copy of this book. Full disclosure: I'm a fellow author for Packt Publishing.
Amazon Verified review
