Working with Matrices

Understanding how TensorFlow works with matrices is essential to understanding how data flows through computational graphs.

Getting ready

Many algorithms depend on matrix operations. TensorFlow gives us easy-to-use operations to perform such matrix calculations. For all of the following examples, we can create a graph session by running the following code:

import numpy as np
import tensorflow as tf
sess = tf.Session()

How to do it…

  1. Creating matrices: We can create two-dimensional matrices from numpy arrays or nested lists, as we described in the earlier section on tensors. We can also use the tensor creation functions and specify a two-dimensional shape for functions such as zeros(), ones(), truncated_normal(), and so on. TensorFlow also allows us to create a diagonal matrix from a one-dimensional array or list with the function diag(), as follows (an additional sketch of building a matrix from a nested list appears after the note below):
    identity_matrix = tf.diag([1.0, 1.0, 1.0])  # 3x3 identity matrix
    A = tf.truncated_normal([2, 3])             # 2x3 random matrix (truncated normal)
    B = tf.fill([2, 3], 5.0)                    # 2x3 matrix filled with 5s
    C = tf.random_uniform([3, 2])               # 3x2 random uniform matrix
    D = tf.convert_to_tensor(np.array([[1., 2., 3.], [-3., -7., -1.], [0., 5., -2.]]))
    print(sess.run(identity_matrix))
    [[ 1.  0.  0.]
     [ 0.  1.  0.]
     [ 0.  0.  1.]]
    print(sess.run(A))
    [[ 0.96751703  0.11397751 -0.3438891 ]
     [-0.10132604 -0.8432678   0.29810596]]
    print(sess.run(B))
    [[ 5.  5.  5.]
     [ 5.  5.  5.]]
    print(sess.run(C))
    [[ 0.33184157  0.08907614]
     [ 0.53189191  0.67605299]
     [ 0.95889051  0.67061249]]
    print(sess.run(D))
    [[ 1.  2.  3.]
     [-3. -7. -1.]
     [ 0.  5. -2.]]

    Note

    Note that if we were to run sess.run(C) again, TensorFlow would re-sample the random values, and we would end up with different results.
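    As referenced in step 1, we can also build a matrix directly from a nested Python list (or a numpy array) rather than a creation function. A minimal sketch (our addition, not part of the original recipe) using tf.constant():

    # A 2x3 matrix built directly from a nested Python list.
    E = tf.constant([[1., 2., 3.],
                     [4., 5., 6.]])
    print(sess.run(E))
    [[ 1.  2.  3.]
     [ 4.  5.  6.]]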

  2. Addition and subtraction use the standard operators, as follows:
    print(sess.run(A+B))
    [[ 4.61596632  5.39771316  4.4325695 ]
     [ 3.26702736  5.14477345  4.98265553]]
    print(sess.run(B-B))
    [[ 0.  0.  0.]
     [ 0.  0.  0.]]
  3. For matrix multiplication, use the matmul() function:
    print(sess.run(tf.matmul(B, identity_matrix)))
    [[ 5.  5.  5.]
     [ 5.  5.  5.]]
    The matmul() function also has arguments that specify whether to transpose either matrix before multiplication and whether each matrix is sparse (see the sketch after this step).
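    As mentioned in step 3, matmul() accepts transpose_a, transpose_b, a_is_sparse, and b_is_sparse keyword arguments. Here is a minimal sketch of the transpose flags (our addition, not part of the original recipe); since A and B are both 2x3, transposing B yields a 2x2 product:

    # Multiply A (2x3) by the transpose of B (3x2), giving a 2x2 result.
    print(sess.run(tf.matmul(A, B, transpose_b=True)))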
  4. To transpose a matrix, use the following:
    print(sess.run(tf.transpose(C)))
    [[ 0.67124544  0.26766731  0.99068872]
     [ 0.25006068  0.86560275  0.58411312]]
  5. Again, note that re-evaluating C re-samples the random values, which is why this transpose shows different values than before.
  6. For the determinant, use the following:
    print(sess.run(tf.matrix_determinant(D)))
    -38.0
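    As a quick check of this result (our arithmetic, not the book's), expanding det(D) along the first row gives 1·((-7)(-2) - (-1)(5)) - 2·((-3)(-2) - (-1)(0)) + 3·((-3)(5) - (-7)(0)) = 19 - 12 - 45 = -38, which matches the output above.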
  7. For the inverse of a matrix, use the following:
    print(sess.run(tf.matrix_inverse(D)))
    [[-0.5        -0.5        -0.5       ]
     [ 0.15789474  0.05263158  0.21052632]
     [ 0.39473684  0.13157895  0.02631579]]

    Note

    Note that the inverse method is based on the Cholesky decomposition if the matrix is symmetric positive definite, and on the LU decomposition otherwise.
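    As a quick sanity check (our addition, not part of the original recipe), multiplying the inverse by D should recover the identity matrix, up to floating-point error:

    # D^-1 * D should be (approximately) the 3x3 identity matrix.
    print(sess.run(tf.matmul(tf.matrix_inverse(D), D)))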

  8. For the Cholesky decomposition, use the following:
    print(sess.run(tf.cholesky(identity_matrix)))
    [[ 1.  0.  0.]
     [ 0.  1.  0.]
     [ 0.  0.  1.]]
  9. For eigenvalues and eigenvectors, use the following code:
    print(sess.run(tf.self_adjoint_eig(D)))
    [[-10.65907521  -0.22750691   2.88658212]
     [  0.21749542   0.63250104  -0.74339638]
     [  0.84526515   0.2587998    0.46749277]
     [ -0.4880805    0.73004459   0.47834331]]

Note that the function self_adjoint_eig() outputs the eigenvalues in the first row and the eigenvectors in the remaining rows. In mathematics, this is known as the eigendecomposition of a matrix.
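Depending on your TensorFlow release, self_adjoint_eig() may instead return a pair of tensors rather than one stacked matrix. A hedged sketch assuming that two-output API:

    # Assumes a TF 1.x build where self_adjoint_eig() returns (eigenvalues, eigenvectors).
    eigenvalues, eigenvectors = sess.run(tf.self_adjoint_eig(D))
    print(eigenvalues)   # shape (3,), in non-decreasing order
    print(eigenvectors)  # shape (3, 3); column i pairs with eigenvalues[i]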

How it works…

TensorFlow provides all the tools we need to get started with numerical computations and to add such computations to our graphs. This notation might seem quite heavy for simple matrix operations. Remember that we are adding these operations to the graph and telling TensorFlow which tensors to run through them. While this might seem verbose now, becoming familiar with this notation will pay off in later chapters, where this way of computing makes it easier to accomplish our goals.
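To make the deferred-execution point concrete, here is a small sketch of our own (not from the book). We can chain several of the operations above into a single graph node; nothing is computed until sess.run() is called:

    # Chain two operations into one graph node: (B x I) + B.
    composite = tf.matmul(B, identity_matrix) + B
    # Only this call actually pushes tensors through the graph.
    print(sess.run(composite))
    [[ 10.  10.  10.]
     [ 10.  10.  10.]]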

You have been reading a chapter from
TensorFlow Machine Learning Cookbook
Published in: Feb 2017
Publisher: Packt
ISBN-13: 9781786462169