Windows Server 2012 Automation with PowerShell Cookbook

You're reading from Windows Server 2012 Automation with PowerShell Cookbook. If you work on a daily basis with Windows Server 2012, this book will make life easier by teaching you the skills to automate server tasks with PowerShell scripts, all delivered in recipe form for rapid implementation.

Product type: Paperback | Published: March 2013 | Publisher: Packt | ISBN-13: 9781849689465 | Length: 372 pages | Edition: 1st

Author: Edrick Goad

Using jobs


Many of the PowerShell scripts we will create will execute in a serial fashion; that is, A starts before B, which starts before C. This method of processing is simple to understand, easy to create, and easy to troubleshoot. However, some processes make serial execution difficult or undesirable. In that situation, we can use jobs to start a task and then move it to the background so that we can begin the next task.

A common situation I run into is needing to get information from, or execute a command on, multiple computers. If it is only a handful of systems, standard scripting works fine. However, with dozens or hundreds of systems, a single slow system can slow down the entire process.

Additionally, if one of the systems fails to respond, it can break the entire script, leaving me scrambling through the logs to see where it failed and where to pick it back up. Another benefit of using jobs is that each job executes independently of the rest, so one job can fail without breaking the others.
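To make that benefit concrete, here is a minimal, illustrative sketch (not part of the recipe itself) in which one job deliberately throws an error; the other jobs still complete, and each job's outcome can be reported individually:

    # Minimal sketch: one failing job does not break the others
    $jobs = ForEach ($n in 1..3)
    {
        Start-Job -ScriptBlock {
            param($i)
            if ($i -eq 2) { throw "Simulated failure on task $i" }
            "Task $i succeeded"
        } -ArgumentList $n
    }
    Wait-Job $jobs | Out-Null
    ForEach ($job in $jobs)
    {
        if ($job.State -eq 'Failed')
        {
            "Job $($job.Id) failed: $($job.ChildJobs[0].JobStateInfo.Reason.Message)"
        }
        else
        {
            Receive-Job $job
        }
    }
    Remove-Job $jobs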

How to do it...

In this recipe, we will create a long-running process and compare the timing for serial versus parallel processing. To do this, carry out the following steps:

  1. Create a long-running process that runs in serial:

    # Function that simulates a long-running process
    $foo = 1..5
    Function LongWrite
    {
        Param($a)
        Start-Sleep -Seconds 10
        $a
    }

    # Call LongWrite once per value, one after another
    $foo | ForEach-Object { LongWrite $_ }
  2. Create a long-running process using jobs:

    # Long-running process using jobs
    ForEach ($foo in 1..5)
    {
        # The value must be passed via -ArgumentList and received by param();
        # a job runs in its own runspace and cannot see the caller's $foo
        Start-Job -ScriptBlock {
            param($i)
            Start-Sleep -Seconds 10
            $i
        } -ArgumentList $foo -Name "Job$foo"
    }
    Wait-Job *      # Block until every job has finished
    Receive-Job *   # Collect the output from each job
    Remove-Job *    # Remove the completed jobs from the session

How it works...

In the first step, we create an example long-running function that simply sleeps for 10 seconds and then returns the value it was given. The script uses a pipeline to execute our LongWrite function serially five times. As expected, it takes just over 50 seconds to complete.

The second step executes the same process five times, but this time using jobs. Instead of calling the function directly, we use Start-Job, which creates a background job, starts it, and immediately returns so that the loop can launch the next one. Note that each job runs in its own runspace and cannot see the caller's variables, so the value is passed in through -ArgumentList and received by the script block's param() declaration. Once all the jobs have been started, Wait-Job * waits for all running jobs to complete, Receive-Job retrieves the output from the jobs, and Remove-Job removes the finished jobs from the session.
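If you want to check on the jobs before collecting their output, Get-Job lists each job and its state; a small illustrative addition, assuming the jobs from step 2 are still in the session:

    # List each job's name and state (Running, Completed, Failed, and so on)
    Get-Job | Select-Object Id, Name, State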

Because of the setup and teardown work required to create and manage jobs, the process takes longer than the 10 seconds you might expect. In a test run, it took approximately 18 seconds in total to create the jobs, run them, wait for them to complete, retrieve their output, and remove them from the session.
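If you want to reproduce the comparison yourself, Measure-Command can time both approaches; this is a minimal sketch, assuming the LongWrite function from step 1 is already defined (your exact numbers will differ):

    # Time the serial approach
    $serial = Measure-Command {
        1..5 | ForEach-Object { LongWrite $_ }
    }

    # Time the job-based approach
    # (note: the * wildcards match every job in the session)
    $parallel = Measure-Command {
        ForEach ($i in 1..5)
        {
            Start-Job -ScriptBlock { param($n) Start-Sleep -Seconds 10; $n } -ArgumentList $i | Out-Null
        }
        Wait-Job * | Out-Null
        Receive-Job * | Out-Null
        Remove-Job *
    }

    "Serial  : {0:N1} seconds" -f $serial.TotalSeconds
    "Parallel: {0:N1} seconds" -f $parallel.TotalSeconds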

There's more...

  • Scaling up: While moving from 50 seconds to 18 seconds is impressive in itself (cutting the run-time to 36 percent of the original), larger workloads can give even better results. Extending the command to run 50 times (instead of the original 5) can decrease run-times to 18 percent of the serial method.

  • Working with remote resources: Jobs can be used both locally and remotely. A common need for a server admin is to perform a task across multiple servers. Sometimes, the servers respond quickly, sometimes they are slow to respond, and sometimes they do not respond and the task times out. These slow or unresponsive systems greatly increase the amount of time needed to complete your tasks. Parallel processing allows these slow systems to respond when they are available without impacting the overall performance.

By using jobs, the task can be launched on multiple servers simultaneously. This way, the slower systems won't prevent the other systems from processing. And, as the sketch below illustrates, a success or failure report can be returned to the administrator.
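Here is a minimal sketch of that pattern using Invoke-Command with -AsJob. The server names are placeholders, and the target machines must have PowerShell remoting (WinRM) enabled:

    # Query a service on several servers in parallel; SERVER1-3 are hypothetical names
    $servers = 'SERVER1', 'SERVER2', 'SERVER3'
    $jobs = ForEach ($server in $servers)
    {
        Invoke-Command -ComputerName $server -AsJob -ScriptBlock {
            Get-Service -Name 'W32Time'
        }
    }
    Wait-Job $jobs | Out-Null

    # Report success or failure per server
    ForEach ($job in $jobs)
    {
        $target = $job.ChildJobs[0].Location
        if ($job.State -eq 'Completed')
        {
            "$target : success - W32Time is $((Receive-Job $job).Status)"
        }
        else
        {
            "$target : FAILED - $($job.ChildJobs[0].JobStateInfo.Reason.Message)"
        }
    }
    Remove-Job $jobs -Force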
