Wednesday, March 21, 2012

Month of Lunches - Day 15



Once again Don nails a chapter title. Today we are learning about variables and the different ways of working with them. From my time with PowerShell, and from working with other people's scripts, I can see that variables are commonly used as a primary means of storing not only objects but also the results of commonly used commands within a script.

To create a variable in PowerShell you need only type a variable name preceded by a dollar sign ($), followed by an equal sign (=) and the value it will contain. Let's give it a try.

                $computername = 'server1'

You just created a variable with the string server1 as its value. You can store multiple objects, or a collection, in a variable as well.

                $computername = 'server1','server2','server3','localhost'

Now, to recall the objects within the variable, simply type the variable's name.

                $computername

In your console you will see a list of all the objects stored within the variable. The preceding example also demonstrates something else we learned in this chapter. Notice the single quotes around each name. These tell PowerShell that what is contained inside them is a literal string. This was a little confusing to me, so I went over the examples in the book several times to make sure that I had it. Here is the example:

                $var = 'What does $var contain'
                $var

The output you will see is the literal string "What does $var contain". The second part of this example is what makes it click.

                $computername = 'server-r2'
                $phrase = "The computer name is $computername"
                $phrase

The output here will show "The computer name is server-r2". Give it a try. If we had enclosed the $phrase text in single quotes, the output would have been exactly as we typed it: The computer name is $computername. Something to remember here: when you use a variable inside a double-quoted string, PowerShell parses it the moment you hit Enter and stores the resulting text. If you later change the original variable, in this case $computername, the contents of $phrase remain the same: The computer name is server-r2.

The second thing I found very useful in this chapter was the backtick character. The backtick is the escape character in PowerShell. Basically, it removes the special meaning of the character that follows it (or, in some cases, adds special meaning). Here is the book example:

                $computername = 'Server-R2'
                $phrase = "`$computername contains $computername"
                $phrase

The output of this variable will be "$computername contains Server-R2". Notice the first instance of the variable was not processed but the second was. That is because the backtick removes the power of the dollar sign to parse a variable.

OK, so you can store one or more objects in a variable, but how is this useful? The examples from the book simply used string values, but the lab work takes that a step further. We are asked to pull information from the Win32_BIOS class for two computers stored in a variable and run it as a background job, receive that job's output into a variable, view the variable, and then export it as a CliXML document. Here is how I did it using variables and some of the other things I have learned along the way.

                $computer = 'mycomputername','localhost'   # I only have one computer for this
                $job = get-wmiobject win32_bios -computername $computer -asjob
                $biosinfo = Receive-Job -job $job
                $biosinfo
                $biosinfo | export-CLIXml -path c:\powershell\Bios_Info.xml
                import-clixml -path c:\powershell\bios_info.xml

Just to verify, I imported the file to ensure it was stored properly. Variables can be pretty powerful stuff in PowerShell, as you can see. There is so much more to learn about variables, but I just don't have the space today. I believe I will revisit this in another post. Until tomorrow, happy PowerShelling.

Tuesday, March 20, 2012

Month of Lunches - Day 14


In this chapter we go over security in Windows PowerShell. We learn about the default security settings, how to manage PowerShell with them, and the ramifications of what PowerShell can do.

First and foremost, Don points out that security was of the utmost importance in the design of PowerShell. It was well thought out and implemented. In essence, if you can't change a setting with the GUI, then PowerShell will not change it either. Now, as with anything, there are ways around this, but it's like the age-old adage says: "It keeps honest people honest."

By default, PowerShell does not allow the execution of script files. That's right, a scripting environment that will not let you run scripts by default. You can type standard commands in the console without issue, however. This behavior is controlled by the execution policy. If you try to run a script, you will see an error message stating "The execution of scripts is disabled on this system", and in order to change the execution policy you must be an administrator on that machine. To change it, PowerShell provides a cmdlet that is as easy to guess as most of the others, Set-ExecutionPolicy, and there are five different execution policy settings. These are listed below.
                Restricted - Disallows the running of any script on that particular system. Keep in mind this does not mean that you cannot collect data from that machine with PowerShell; it only means that scripts cannot run in the shell on that machine.
                AllSigned - Allows the running of scripts that are digitally signed by a trusted publisher.
                RemoteSigned - Allows any script created locally on the machine to run, but scripts obtained from a remote source must be digitally signed. This is the Microsoft-recommended security level, offering the most functionality with the fewest restrictions.
                Unrestricted - The least restrictive of the standard settings; allows both local and remote scripts to run on the host.
                Bypass - The last setting is primarily used by programmers as a way to integrate PowerShell within their applications. It bypasses the execution policy entirely.
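Putting that together, checking and changing the policy is a one-liner each way. A minimal sketch (the change must be run from an elevated shell):

```powershell
# See the current policy before changing anything
Get-ExecutionPolicy

# Allow locally created scripts, while still requiring signatures on remote ones
# (must be run as an administrator)
Set-ExecutionPolicy RemoteSigned
```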

Don mentions two other security measures that are implemented by default within PowerShell. The first is file association. This is not so much a security restriction in my mind as it is just general good practice. The default file association for the .PS1 file extension, which is a PowerShell script file, opens these files in Notepad or the default text editor. The second thing PowerShell does is disallow launching scripts from within the shell simply by typing their names. For instance, if you had a script file named Get_Services.PS1 on the root of your C: drive, you could not run it simply by going to the root of C: and typing Get_Services.PS1. In order to run scripts from the console you have to preface the filename or path with .\ (dot backslash).
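A quick sketch of that dot-backslash rule, assuming a script named Get_Services.PS1 actually exists at the root of C::

```powershell
Set-Location C:\        # move to the root of the drive
Get_Services.PS1        # fails - the shell will not run a script by bare name
.\Get_Services.PS1      # works - the explicit path makes it run
```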

Those are the basics, and there is plenty more to cover on this topic, including Active Directory Group Policy settings and ways to create your own certificates or use locally signed ones.

Month of Lunches - Day 13


In this chapter we cover working with bunches of objects. This allows multiple PCs, services, or anything else to be managed across multiple computers with a single script or command. Don refers to this as Mass Management, and it's a very good description of what you are doing.

There are a couple of different ways to accomplish tasks like this. The first is the very basic ability to pipe objects from one cmdlet to another. This example has come up in multiple chapters, but PLEASE DO NOT RUN IT (unless you want to crash your computer).
               
                Get-Service | Stop-Service

Very simply, what this command does is pipe a collection of objects (in this case, service objects) to the Stop-Service cmdlet, stopping them. This can be done with almost any of the Get cmdlets. This is the preferred way to work in PowerShell: if there is a cmdlet, use it. Don't reinvent the wheel.

Sometimes cmdlets are not available and you have to find other ways to gather data and perform tasks against it. This is the case with WMI, which we covered in more detail in chapter 11. Don uses the example of changing network configuration settings, but many WMI objects have methods that allow properties to be changed. You only need to know how to query for the information on your local and remote workstations and then pipe that information to Invoke-WmiMethod to change the setting. This can be invaluable.
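As a hedged sketch of that query-then-invoke pattern (do not run this casually; EnableDHCP reconfigures real adapters), the network configuration class Don references could be driven like this:

```powershell
# Find adapters with static addresses and switch them to DHCP.
# CAUTION: this changes live network settings - shown for illustration only.
Get-WmiObject -Class Win32_NetworkAdapterConfiguration -Filter "IPEnabled='True' AND DHCPEnabled='False'" |
    Invoke-WmiMethod -Name EnableDHCP
```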

The next section is where things start to get a little trickier, especially for those of us with no scripting or programming background: enumerating objects with the ForEach-Object cmdlet. Off to the help file I went, and after about an hour of playing in my shell, reading the help, and web searching, I think I have a better grasp of it. Basically, it is a loop that checks each object in a collection and runs a command or set of commands that you designate. To put this as simply as possible, here is an example from the help file.
               
                1, 2, $null, 4 | ForEach-Object {"Hello"}

What this will do is treat each value, even $null, as an object and display "Hello" every time it finds one. So the output you will get is: Hello, Hello, Hello, Hello. Imagine the possibilities of what can happen with this. I am still looking up info on this and ways that it can be used. Looking at scripts from my co-worker, basically anything that is performed against multiple machines, accounts, services, etc. uses this method.
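For something more practical than printing Hello, the same loop shape works on real objects; inside the script block, $_ is the current object coming down the pipeline:

```powershell
# Echo each service's display name, one object at a time
Get-Service | ForEach-Object { $_.DisplayName }
```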

I still have so much to learn and so far to go, but I'm loving every minute of it. Have a great day and happy PowerShelling.

Month of Lunches - Day 12


So everybody wants to be more efficient with their work as well as at home. This chapter should help me solve at least half of that problem. This chapter covers multitasking and background jobs in PowerShell.

Normally with PowerShell you type a command, hit return, and then sit and wait for the command to finish. You cannot run another command until the first one completes. Now, you could open a second window, but what happens if you have specific modules loaded or variables set for the task you're completing? Instead, you can run the same command and have it moved to the background. The benefit of running a command as a job is that, if it will be running for a while, you can continue to use the shell, and the results are stored for later.

There are some pitfalls to having commands run in the background, however. If a background job prompts for input, it will eventually stop without completing, because you cannot reply. Running a command in the shell produces error messages for you to see, but background errors will not be visible until the job is retrieved. If a command is run in the shell you see the output as soon as it completes; background job results have to be retrieved. And last but certainly not least, remember this is not real-time data but a snapshot of what was going on at the exact moment the job ran.

There are a couple of kinds of background jobs discussed in this chapter. The first is a local job. A local job runs, for the most part, as the name states, on your local workstation. It can request information from remote PCs if necessary through the cmdlets, but your terminal does the heavy lifting. The cmdlet for this is Start-Job. This cmdlet has a lot of really useful parameters, so please read the help file for full details. The following example starts a background job named Local_Process that runs Get-Service.

                Start-Job -ScriptBlock {Get-Service} -Name 'Local_Process'

Some cmdlets have the ability to run as jobs via a parameter. One of the major ones Don lists is Get-WmiObject. There are others, and if you want to see them, run Help * -Parameter AsJob. You may also run jobs through the remoting tools we covered in chapter 10; the cmdlet to do this is Invoke-Command.
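For example, the remoting version of a job looks like this (the computer names here are placeholders; substitute your own):

```powershell
# Run Get-Service on several machines at once, as a background job
# 'server1' and 'server2' are hypothetical names
Invoke-Command -ComputerName server1,server2 -ScriptBlock { Get-Service } -AsJob
```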

To check the status of running or completed jobs type

                Get-Job

This will display the list of jobs in your current session only, however; no jobs from previous sessions are cached. Note that when you do this it displays an ID number, Name, State of the job, and a HasMoreData column. This column shows whether there is data to be retrieved for that job (PowerShell removes the data once the job is retrieved). To retrieve the results of a job you run Receive-Job. With this cmdlet you can bring up the table or list you created in your script block, pipe it to Format-List or Format-Table for custom formatting, or output it any other way you like. It's the same as any regular PowerShell object.
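A minimal end-to-end run might look like this; the -Keep switch is worth knowing, since it retrieves the results without discarding the cached data:

```powershell
Start-Job -ScriptBlock { Get-Process } -Name 'ProcJob'   # kick off the job
Wait-Job -Name 'ProcJob'                                 # block until it finishes
Receive-Job -Name 'ProcJob' -Keep                        # view results, keep the data cached
```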

There are many ways to do background jobs, and as with anything in PowerShell, no two people do things alike. Just remember: if you're getting your results, it's not wrong.

Monday, March 19, 2012

Month of Lunches - Day 11


Today we are covering WMI or Windows Management Instrumentation. WMI is something that I have had some experience with in the past through SMS, CA Unicenter,  and other administrative applications. It opens up a whole new world of information that is ripe for the picking.

WMI is set up in a hierarchical structure, similar to almost everything Microsoft builds. At the top is the Root namespace, and under it are child namespaces such as Root\CIMv2. Below the namespaces you have the classes, and each class has a set of properties and methods, among other things. Don mentions something in this chapter I was not aware of, and that can be a little tricky: under the different classes there may be multiple instances of a class running for one reason or another (different user accounts, multiple instances of a service, etc.).

Exploring WMI can be a bit daunting for the average administrator, as there is no built-in way of searching it. Everything is a hunt for the correct information, or you can use your trusty search engine.

The tools that you can use to look at the namespaces and properties can make this a little easier. Don mentions one in the book, and it is a good one, called WMI Explorer. It is a free tool found on the www.primalscripts.com website in their downloads section. You do have to register for an account to download it, but it is worth the time, as there are a lot of other useful free tools from Primal. There are also numerous downloads on www.CNET.com if you search for WMI Explorer. If you are in a bind and don't have time to download anything (or can't, in some cases), there is a WMI tool built into Windows called WBEMTEST. It works in a pinch but is a bit trickier to navigate. You can access it from the CMD prompt or the Run box by typing WBEMTEST.exe. There is a great article on TechNet about its use, which can be found here: http://technet.microsoft.com/en-us/library/cc785775%28v=ws.10%29.aspx

How does PowerShell access WMI? There is a handy little cmdlet called Get-WmiObject, aliased as GWMI. Since PowerShell does not work directly with WMI, there is a bit of a learning curve due to the difference in syntax and the way WMI properties are referenced. For instance, if you wanted to see a list of all the classes in the CIMv2 namespace from your console, type:

                Get-WmiObject -Namespace root\cimv2 -List

This is a pretty expansive list, but you can filter it if you have an idea of what you are looking for. The help file will be of much more use than I could be trying to explain it, so check it out (Help GWMI -Full).
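Two lookups I found handy while poking around, as a sketch: listing child namespaces is a separate query against the __Namespace system class, while a wildcard with -List narrows the class list.

```powershell
# Child namespaces directly under Root
Get-WmiObject -Namespace root -Class __Namespace | Select-Object Name

# Only the classes in root\cimv2 whose names mention 'disk'
Get-WmiObject -Namespace root\cimv2 -List *disk*
```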

A point of note: PowerShell does not contain any help on the actual properties and methods of the WMI classes. Also, WMI has not been very well documented; it was generally use-it-as-you-want-it for the different product development groups within the company. That has started to change recently. The Microsoft Developer Network (http://msdn.microsoft.com) now contains a lot of information, although search engines are still a good way of finding specific class info.

One thing I would like to add here is that in the book Don references a way of showing all the software installed on a given machine with WMI. The class for this is Win32_Product, and it can be a very bad thing to use in an enterprise. I have had issues in the past with SMS, CA Unicenter, and administrative scripts that called this class. Basically, querying it causes WMI to look at each installed application and perform a reconfiguration operation as a way of verifying the application is installed. This makes the query not only slow but very processor- and memory-intensive on the machines you run it against. You can verify this is happening by going to your Event Viewer under Application (I think), where you will see an Event ID 1035 for every program installed on your system. I would not recommend using this class. The registry is still a much safer bet.
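As a hedged alternative sketch, the same inventory can usually be pulled from the registry's Uninstall key without triggering any reconfiguration:

```powershell
# Read installed-software entries straight from the registry (no Win32_Product side effects)
Get-ItemProperty 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Select-Object DisplayName, DisplayVersion |
    Where-Object { $_.DisplayName }   # drop entries with no display name
```

On 64-bit systems, 32-bit applications land under the Wow6432Node branch of the same key, so a complete inventory checks both paths.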

I'll be updating tomorrow, but until then have a great day.

Friday, March 16, 2012

Any Experience out there with WDS\MDT and Powershell

I started building custom images a few years ago when I learned about WinPE\BartPE and found that I could build custom boot disks to standardize the images being put on end users' desktops rather than cloning them. Back then I was working on a small domain of only a couple hundred users and PCs. Now it's a domain of 6000 users, 300+ servers, 3000 PCs, and 1500 thin clients using different virtualization technologies, and I get to work with a team to standardize the desktop experience for them all.

So anyway, we are in the process of changing our image deployment solution from Symantec\Norton to Windows Deployment Services (WDS) and the Microsoft Deployment Toolkit (MDT). Almost everything I have found so far can be automated with PowerShell. MDT even provides you with the scripts when you perform an action.

The only issue I'm having is that there is not a lot of information or tooling that I have been able to find to help with this automation. I'm trying to find consoles, HTAs, or forms that can simplify this, and I'm not having much luck. Anyone out there have experience with these? Any help would be appreciated.

Thursday, March 15, 2012

Just one of those Weeks

<Rant>
Have you ever just had one of those weeks where it seems like nothing is going right? Well, that is what is going on with me. I'm still playing catch-up on everything at work and home from when I had the flu, and today, as I'm getting ready to post my Month of Lunches blogs, my wife calls me into the living room, and guess what happens next. My beautiful three-year-old daughter gets on my computer, and my Word documents are nowhere to be found.
I can't get mad at her, but I am pretty upset with myself. I have one more post to do to catch up with the rest of the five, and then I'm going to try to get ahead so that if something like this happens again, and it will, I will be prepared.
</Rant>
I hope at least some people are seeing this and getting something from it.

Month of Lunches - Day 10


Remote Control: One to One, and One to Many. This chapter is all about the remote functions of PowerShell and how to use them.

Don mentions something in the first part of this chapter that I noticed when I started with the help files in chapter 3. I got really excited when I saw the -computername parameter in the help file for Get-Service, but didn't see it in many of the other commands. This was strange to me, because from everything I have read I know that PowerShell has a very powerful remoting capability.
So how does it work? In a nutshell, you are pushing the commands out across the network; they run on the remote machines, which send the information back to your console. PowerShell uses a specific service called WinRM. Now, the really cool part is that all of PowerShell's remote traffic is carried over the HTTP and HTTPS protocols, and all of the returned PowerShell objects are converted to XML.

Configuring WinRM for PowerShell is a fairly straightforward process, and it must be done on every machine you want to run remote commands on. If you only have a few to do, or are in a workgroup, you can just run the Enable-PSRemoting cmdlet. Basically, this command starts the WinRM service and sets its startup type to Automatic, registers PowerShell as an endpoint with WinRM, and even sets a firewall rule allowing WinRM traffic. Now, if you are on a domain, you can make this super easy and just use a Group Policy Object template, and if your servers are 2008 R2 the settings are built right in. They are located in: Computer Configuration/Administrative Templates/Windows Components/Remote Shell and Windows Remote Management.
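On a test box, the whole setup plus a sanity check can be as short as this (run from an elevated shell):

```powershell
Enable-PSRemoting -Force     # start WinRM, set it to auto-start, add the firewall rule
Test-WSMan localhost         # confirm the WinRM listener is answering
```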

To start a one-to-one connection in PowerShell, use the Enter-PSSession cmdlet (and to close it you use the inverse, Exit-PSSession). Then you just specify the -computername parameter and away you go. Easy, right? One of the best parts is that it passes your local account for authentication, so you wind up with the same permissions on the remote computer (if you're running as admin locally, your shell runs as admin on the remote system).
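A minimal session, using the server name from earlier in the post as a stand-in:

```powershell
Enter-PSSession -ComputerName server-r2   # prompt changes to show the remote name
Get-Service                               # this now runs on the remote machine
Exit-PSSession                            # back to the local shell
```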

If you want to use one-to-many remoting then the next section is for you. As Don puts it in the book, this is full scale distributed computing. The cmdlet that does this is Invoke-Command. You can specify the remote computers with the computername parameter.

                Examples: Invoke-Command -ComputerName wkstn1,wkstn2,server1,server2,server3 -ScriptBlock {Get-Service}
                Invoke-Command -ComputerName (Get-Content c:\list.txt) -ScriptBlock {Get-EventLog -LogName System -Newest 100 | Where-Object {$_.EntryType -eq 'Error'}}

The really cool part about this last command is that the remote computers process all of the information, all at the exact same time, and your admin workstation receives all the results. Nothing more than collecting the results is done on your local machine. How cool is that? I can't wait to test this on a domain. : )

There is so much more to cover that I am going to have to write a second post when I actually get a chance to implement and test the remote capabilities. I'll report back, but until then have a great day.

Month of Lunches - Day 9


Today's chapter is "Filtering and Comparisons". This is another great chapter about how to get the most out of your PowerShell viewing experience. We have touched on a little of this in previous chapters as part of other examples, but now we get to dive right in.

The very first thing Don covers in this chapter is Filter Left. Basically, this means put your filters as far to the left (as close to the beginning of the command line) as possible. This way the biggest part of the work is done at the beginning of the command, leaving less for the cmdlets at the end to do.

This is where we are introduced to Where-Object, or Where (oddly enough, while researching this I found it also has another alias, the question mark, ?). Where-Object allows you to filter any type of object pretty much however you want, while it's in the pipeline. To grasp what this command can do you have to read and re-read the help text. To put it in perspective, there are 35 different parameters that can be used in 31 different parameter sets. There are just so many ways to manipulate objects with this command.

Next we covered comparison operators. These are the standard ways of comparing two values:

                -eq          = Equal to. Direct comparison of two values. This produces a True or False answer.
                -ne          = Not equal to. This is the inverse of -eq. It also produces a True or False answer.
                -gt and -lt  = Greater than and less than. Just think < and > from math and you get the idea.
                -ge and -le  = Greater than or equal to, and less than or equal to.

You can also make these case-sensitive with -ceq, -cne, -cgt, -clt, and so on and so forth. One thing to remember here is that PowerShell uses $True and $False to represent true and false in a command.
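A quick demo of the difference; string comparisons ignore case unless you ask for the c-variant:

```powershell
'PowerShell' -eq  'powershell'   # True  - default comparison ignores case
'PowerShell' -ceq 'powershell'   # False - -ceq is case-sensitive
5 -gt 3                          # True
```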

This next section apparently makes a whole lot more sense to people with programming experience, and it will take me some time to fully grasp: iterative commands. I understand what they do, but I don't fully understand how they work or how to use them. If you know, please explain it to me in layman's terms.

The last and most important thing I took from this lesson is the $_ placeholder. This allows one object of a collection at a time to be processed by the cmdlet, rather than the collection as a whole. This one probably needs an example, and I will use the one from the book but shorten it.

                Get-Service -ComputerName (Get-Content c:\names.txt) | Where-Object -FilterScript {$_.Name -notlike '*svc'}

OK, so here is what is going on in this command. First, the parenthetical portion (the Get-Content part) runs, and its results are fed to Get-Service -ComputerName. That collection of service objects is then passed through the pipeline to Where-Object, whose filter script block uses the -notlike comparison to decide which objects pass through. Makes sense, right? Actually, this does make sense to me.

Well, that was it for today. Try some of this out on your own if you're new to PowerShell, and even if you're not, as a basic refresher.

Month of Lunches - Day 8

Today's chapter is over "Formatting, and why it's done on the right". I cannot put this any more simply than Don did in the title of section 8.1, "Formatting: Making what you see Prettier". This chapter is all about how to make those outputs a little more visually appealing and display the proper information where you want it.

The first thing Don covers is the default formatting and how it works. In a nutshell, the formatting is done by XML files, named with a .format.ps1xml extension. You can find these files in the PowerShell installation directory.

VERY IMPORTANT: Do not modify these files in any way. If you open them, never save any changes. They are digitally signed files, and modifying them breaks the signature, after which PowerShell is unable to use them. So that default formatting you get on outputs? You will not get it again.

Ok, so you want to see what is there? Open up your PowerShell console and follow along.

                CD $PSHome - This will bring you to your installation folder

Let's just search for all of them to see what we have.

                DIR *format.ps1xml

                Certificate.format.ps1xml
                Diagnostics.Format.ps1xml
                DotNetTypes.format.ps1xml
                Event.Format.ps1xml
                FileSystem.format.ps1xml
                Help.format.ps1xml
                HelpV3.format.ps1xml
                PowerShellCore.format.ps1xml
                PowerShellTrace.format.ps1xml
                Registry.format.ps1xml
                WSMan.Format.ps1xml

You can see from my output here that I have PowerShell v3 installed on the PC where I looked this up, but in v2 the files should be very similar. Well, let's open one up and take a look, shall we?

                Notepad DotNetTypes.format.ps1xml

This will launch Notepad, and you should see the XML file displayed. You can look up specific format layouts by their complete type name with the handy-dandy Find window. Sorry about that, too much Blue's Clues with my daughter. Anyway, here you will see the way object outputs are formatted, including the column header names (by property), width, alignment in the table, and various other pieces of information. Basically, the way this works is that when you type a command, say Get-Process, the cmdlet goes out, gathers its data, and places that data object into the pipeline. As was explained to us before, every command has an Out- command following it, even if you didn't type one. In this case it is Out-Default, which passes the object to Out-Host. These Out cmdlets work with the format.ps1xml files to format your data.

We also covered a number of Format- commands. The first was Format-Table. This cmdlet allows you to specify parameters to custom-fit the table to your screen (-AutoSize), specify the particular properties to be displayed in the table (-Property), group your data by a particular property value (-GroupBy), or even wrap the text if some of the property fields are too long (-Wrap, big surprise, I bet). Are you starting to get the idea that they tried to dummy-proof the cmdlets and their parameters?
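A couple of those parameters in action:

```powershell
# Compact table of three chosen properties
Get-Process | Format-Table -Property Name, Id, CPU -AutoSize

# Services grouped by their status (sort first so each group appears once)
Get-Service | Sort-Object Status | Format-Table -GroupBy Status
```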

Format-List does exactly as it states: it formats output as a list, which is good when you have a lot of property values to see. It groups them by object, as seen below with Get-Process | Format-List. Format-Wide, by contrast, displays a single property in multiple columns across the screen.

                Id      : 432
                Handles : 149
                CPU     : 2.515625
                Name    : explorer

The last thing we covered was another way to output data: Out-GridView. In order to use this you must have the PowerShell ISE installed (which it is not by default). This displays a GUI table, similar to what you would see in Excel, that allows you to sort, rearrange columns, remove/add property fields, or filter data. Super simple, and great if you just want to pull something up and look at it. Well, I'm way over my 500 words for the day, but this chapter was well worth it.

Month of Lunches - Day 7


So today's lesson is "The Pipeline, Deeper". This topic went a little deeper than I expected, and I had to read a lot of it twice. Not sure if that is because I am tired or just do not understand it. Yet! For those of you who have not been following along: the pipeline, or pipe, is how you easily pass the output of one cmdlet in PowerShell to another cmdlet to get further detail, or to request that something further be done with the data.

The first thing we cover in this chapter is pipeline input ByValue. What this means is that you can pipe a whole object to a cmdlet. Let's look at the example from the book. Pull up the full help for the Stop-Service cmdlet.

                Help Stop-Service -Full

You will notice that there are three parameter sets for this cmdlet. One of these parameter sets contains the -InputObject parameter as mandatory. Now scroll down and look for the -InputObject parameter.

                -InputObject <ServiceController[]>
                    Specifies ServiceController objects representing the services to be stopped. Enter a variable that
                    contains the objects, or type a command or expression that gets the objects.

                    Required?                    true
                    Position?                    1
                    Default value
                    Accept pipeline input?       true (ByValue)
                    Accept wildcard characters?  false

What you should see is that this parameter accepts pipeline input, and that it can take that input ByValue. You should also note that this parameter will only take input from a variable that contains the objects or from a command or expression that gets them.

Ok so what does that mean? This is the way that I understand it. Under the hood when you run something like:

                Get-Service -Name BITS | Stop-Service

PowerShell is getting the BITS service object and sending it to Stop-Service. Now, I know that you can just do Stop-Service BITS, but this is an example. Ok so what this means is that you can also do something like:

                "BITS" | Stop-Service

PowerShell is taking the string "BITS", binding it ByValue, and stopping that service. You can stop more than one by separating the names with commas if you would like. Interesting. Now there is a second kind of pipeline input that you may see as well. If you go back to your PowerShell window and look at the -Name parameter you will notice ByPropertyName.

                -Name <String[]>
                    Specifies the service names of the services to be stopped. Wildcards are permitted.
                    The parameter name is optional. You can use "Name" or its alias, "ServiceName", or you can omit the
                    parameter name.

                    Required?                    true
                    Position?                    1
                    Default value
                    Accept pipeline input?       true (ByPropertyName, ByValue)
                    Accept wildcard characters?  true

PowerShell tries ByValue first; ByPropertyName only kicks in if ByValue binding fails. With ByPropertyName, the shell looks at the property names of the incoming objects and matches them to parameter names that accept pipeline input ByPropertyName.
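Here is a quick way to watch ByPropertyName in action. Select-Object strips the pipeline down to objects that have only a Name property, which then binds to the -Name parameter of Stop-Service (-WhatIf previews the result instead of actually stopping anything):

                Get-Service -Name BITS | Select-Object -Property Name | Stop-Service -WhatIf

The incoming objects are no longer ServiceController objects, so ByValue binding fails and PowerShell falls back to matching the Name property to the -Name parameter.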

This chapter also contained quite a few little tips and tricks, like creating multiple user accounts in Active Directory in seconds by piping a properly formatted CSV file to New-ADUser, how to rename and set custom properties when importing CSV files, and how to use parentheses to specify what part of a command runs first to ensure the proper results.
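A rough sketch of that first trick, assuming the ActiveDirectory module is loaded and a hypothetical newusers.csv whose column headers match New-ADUser parameter names (so they bind ByPropertyName):

                # newusers.csv is hypothetical; headers like Name, SamAccountName,
                # Department must match New-ADUser parameter names to bind ByPropertyName
                Import-CSV .\newusers.csv | New-ADUser -WhatIf

Drop the -WhatIf once the preview looks right. If a column header doesn't match a parameter name, you can rename it on the fly with a Select-Object calculated property before piping to New-ADUser.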

I have a better grasp of the pipeline but there is still so much to learn and understand.

Tuesday, March 13, 2012

Month of Lunches - Day 6


Today we are going to cover something that I had touched on previously in Day 4: PowerShell and its use of objects.
Every time that you run a cmdlet in PowerShell and get output to your screen you are seeing text. But under the hood PowerShell is actually creating objects (unless otherwise told). There is a ton of information that you cannot see immediately, but if you know where and how to look there is a wealth of information. For example, as I stated previously running the Get-Process command only displays 8 or so pieces of information but if you export it to a CSV or HTML file you will see that there are 60+ columns of information that are actually within the output.
Don explains the way PowerShell actually handles this data better than anyone for me to date. What you are seeing as a table when it's displayed is actually made up of four things. To make this easier to visualize, let's export our process list to a CSV file. Open your PowerShell window and type:
                Get-Process | Export-CSV <path>\<filename>
Open the CSV file you just created and follow along. What PowerShell is actually showing is:
1.        Objects: Each row of the CSV file is actually an individual object.
2.        Properties: These are the columns. Each column header has a name and these are the properties of that object.
3.        Methods: These are not actually shown. Methods are the actions that can be taken against any or all of the objects.
4.        Collection: This is basically the entire table. It is the sum of all the objects, or rows, their associated properties, or columns, and the methods, or actions, that can be taken against them.
PowerShell creates and uses these objects for a couple of reasons. The first, which Don points out, is that Windows is an object-oriented operating system, and most of its programs are object oriented. That makes it extremely easy for PowerShell to use those objects and create its own. The second reason is ease of use: using objects makes parsing the data infinitely easier.
Ok so you don't want to have to output everything to CSV or HTML to find out what the properties and methods are, do you? No, of course not. Well, this is where Get-Member comes in. You can pipe nearly any command to the Get-Member cmdlet, or its alias GM, and it will display all the associated properties and methods. There are actually several different kinds of properties (ScriptProperty, Property, NoteProperty, and AliasProperty) but these are not important right now. They are all just properties for now. Let's take a look at them.
                Get-Process | GM
This will display all 60+ associated properties, methods, and events of the collection. This is important because you can display and sort these properties in whatever fashion you want with a couple of cmdlets: Select-Object and Sort-Object.
Say you wanted to display a list of running processes in order by name, and you wanted to display only the Name, Memory Usage, ID, and Start Time. You could run the following command:
                Get-Process | Select-Object -property name,VM,ID,StartTime | sort-object -property name
Or you could shorten it with aliases:
                Get-Process | Select -prop name,vm,id,starttime | sort -prop name
As you can see with Select-Object, you can specify multiple properties by separating them with commas, but do not use spaces. PowerShell will see a space as the start of a new parameter and error out. This works with any cmdlet that accepts multiple values. You could also display methods in this manner, but the important ones have cmdlets of their own (method: Kill, cmdlet: Stop-Process).
Just remember that almost everything in PowerShell is an object that has properties, and they can be manipulated. Everything is an object until you tell it not to be.

Month of Lunches - Day 5

Today we are discussing adding commands into PowerShell. PowerShell has two primary ways that cmdlets can be added: snap-ins and modules. With the addition of these cmdlets PowerShell can handle the management of many of Microsoft's products, such as Exchange, SharePoint, SQL, the System Center suite of products, IIS, and many more. Even non-Microsoft software vendors are getting in on the action with their own modules and snap-ins, including VMware, NetApp, Citrix, and more. Just look up your enterprise (and in some cases desktop) software and chances are that the manufacturer has PowerShell cmdlets or someone has created some (always test these before using them on production systems). Both make working with particular programs easier in PowerShell.

Snap-ins generally consist of DLLs, XML files, and help text. The cmdlet to add a snap-in is Add-PSSnapin. You can find a list of the available snap-ins, if there are any, by typing Get-PSSnapin -Registered. To add a new snap-in once it is found, Add-PSSnapin <Snap-in_Name>. Want to see what commands or PSDrives may have been added by that snap-in? Get-Command -PSSnapin <Snap-In Name> and Get-PSProvider, respectively. There are other snap-in specific commands and parameters and if you have been following along with my MOL then you should know how to find them.
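Put together, the whole snap-in workflow looks like this (the snap-in name is made up purely for illustration):

                Get-PSSnapin -Registered                 # list installed but unloaded snap-ins
                Add-PSSnapin My.Vendor.Snapin            # hypothetical snap-in name
                Get-Command -PSSnapin My.Vendor.Snapin   # see what commands it added
                Get-PSProvider                           # see if it added any PSDrive providers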

The second type of extension that is covered by Don is the module. The cmdlet for adding modules is Import-Module. Modules will be automatically visible to PowerShell if they are in one of two folders. The first is your personal modules and will only be accessible if you are logged in under your profile. It is located in your "%UserProfile%\Documents\WindowsPowerShell\Modules" folder. The second is the system modules and will be accessible from any profile on the PC. It is located in the "C:\Windows\System32\WindowsPowerShell\v1.0\Modules" folder. You can also add a module manually by giving its full path to the import command. To get a list of available modules type Get-Module -ListAvailable. To import a particular module, Import-Module <module_name>. You can also view the available cmdlets for a particular module, the same as with snap-ins, with Get-Command -Module <Module_Name>.
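And the matching module workflow, using the ActiveDirectory module as my example (this assumes the RSAT tools that ship it are installed):

                Get-Module -ListAvailable               # list modules PowerShell can find
                Import-Module ActiveDirectory           # load one
                Get-Command -Module ActiveDirectory     # see what cmdlets it added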

If there is a conflict with any of the modules or snap-ins then you can always remove them. To remove them the cmdlets are Remove-PSSnapin and Remove-Module. And as always read the help…

The next section of this chapter was over Server Manager via the command line. There is an entire module (ServerManager) dedicated to adding, modifying, and removing server roles and features through its three cmdlets: Add-WindowsFeature, Get-WindowsFeature, and Remove-WindowsFeature. I find this module especially tempting to use and only wish there was an associated cmdlet that would allow the same to be done on the workstation side (alas there is not as of yet, maybe in V3?).
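On a Server 2008 R2 box that looks something like the following (Telnet-Client is just an example feature, and -WhatIf previews the change without installing anything):

                Import-Module ServerManager              # load the Server Manager module
                Get-WindowsFeature                       # list all roles and features
                Add-WindowsFeature -Name Telnet-Client -WhatIf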

Now, one thing to remember about snap-ins and modules is that they are available only in the PowerShell session that you have open, but fear not, there is a way to ensure they are available every time that you open the shell without having to import or add them: profile scripts! Our very first "script". Don covers this at length, with do's and don'ts, in another blog post here: "http://tinyurl.com/7pf5egw". These will allow you to configure the shell, add modules and snap-ins, and a number of other features. I suggest you read this article and look up the number of others you can find on the same topic.
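As a taste, a profile script is just a script of commands that PowerShell runs at startup, stored at the path held in $profile. A minimal, purely illustrative sketch (both lines are my own examples, not from the book):

                # Example contents of a profile script ($profile)
                Import-Module ActiveDirectory    # a module you use daily (assumes RSAT is installed)
                Set-Location C:\Scripts          # a made-up starting folder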

Until Tomorrow.

Monday, March 12, 2012

Month of Lunches - Day 4

Sorry for the delay folks but the Flu can do that to you.

Ok so now we start to get to the meat of what makes PowerShell so powerful (pardon the pun), and that is the ability to connect commands. Chapter 4 is all about pipelining, or piping one command to another, and some of the output cmdlets within "The Shell".

The first thing we cover is outputting commands to Comma Separated Value (CSV) files and Command Line Interface Extensible Markup Language (CliXML, or just XML) files. Export-CSV and Export-CliXML, and their inversely named Import commands, allow for an easy way to output and archive almost any information that you can get out of PowerShell, or to import previously exported information for reference or for use.

Let's try something shall we? In your console type

                Get-Process

This will display a list of all running processes on your computer at that very moment, as well as the process ID, CPU utilization, memory utilization, and a few other pieces of information. In PowerShell this is actually a collection of objects, and only a small portion of the information that is gathered is actually displayed. Don't believe me? Want to see the rest? Let's export that to a CSV file.

                Get-Process | Export-CSV <filepath>\process.csv (filepath being the folder where you want the file created)

There are literally 60-plus columns of information. In fact, all of the Get- cmdlets I tested are the exact same way. Export-CliXML will show you the same thing, only in a slightly less readable, but very useful, XML structure (if you know what you are working with).
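A quick round trip, using a hypothetical path:

                Get-Process | Export-CliXML C:\temp\process.xml    # snapshot to disk (path is made up)
                Import-CliXML C:\temp\process.xml                  # bring it back as objects

What comes back from Import-CliXML is a collection of objects again, not flat text, so you can keep sorting and filtering it.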

Now say you have two computers that are supposed to be identical but you want to make sure (very useful if you are troubleshooting issues with programs at startup). Well, you're in luck: you have the ability to compare files. Compare-Object, better known by its alias DIFF, can take two objects, compare them, and report the differences. You can even compare two CSV or XML files for that matter. An important thing that Don notes here in the book is that PowerShell is not very good at comparing text documents. This is good to remember and will save you headaches later.

Ok so let's give this one a shot. Let's open a couple of miscellaneous programs like Calculator, Notepad, and Paint. Remember the export from earlier? Let's compare it to our current processes. We only want to compare by the process name, so let's specify that with the -Property parameter. The reason we do this is that we are comparing all the processes and their associated information, and something is bound to have changed, like memory usage, CPU utilization, or any other piece of information.

                Compare-Object -ReferenceObject (Import-CSV <filepath>\process.csv) -DifferenceObject (Get-Process) -Property Name

Phew, that is a lot to type. No worries. Remember, in PowerShell you can use aliases or shorten parameter names, as long as what you use is distinct enough that PowerShell knows what you mean. For instance, the below command will do the same thing.

                Diff -Ref (ipcsv <filepath>\process.csv) -Diff (Get-Process) -Prop Name

Half the typing and all the power. You can even add other property names to show other results if you want for different outputs.

This is only the beginning. You can even convert the output to HTML (ConvertTo-HTML) that can be viewed in any browser, pipe it to a text file (Get-Process | Out-File <filename>), output it to a printer (Get-Process | Out-Printer), as well as a few other Out- commands. As we went over on the previous day, use the Get-Help or Get-Command cmdlets to find and learn more about the other Out- cmdlets, or any cmdlets for that matter.
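One thing to note: ConvertTo-HTML only converts; it needs an Out- cmdlet to land the result on disk. A sketch with a hypothetical filename:

                Get-Process | ConvertTo-HTML | Out-File processes.html    # processes.html is a name I made up

Open the resulting file in any browser.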

Tuesday, March 6, 2012

Month of Lunches - Day 3


In today's chapter we will be going over "Using the Help System". This may be the most important chapter in the book, at least it was for me. As Don describes it, "If you aren't willing to read PowerShell's help files, you won't be effective with PowerShell". I have to admit that I was guilty of this in my first learning experience with PowerShell, so I promised myself I would do better this time around. You need to fully read everything in these help files and understand the commandlet use. My fellow Month of Luncher Zack said it best when he said "System Administrators are skimmers by trade", and we are, but please READ the help files.
The primary commandlet here is Get-Help, or its aliases Help and Man. This will allow you to view the available help on other commandlets or help topics. You can even run Get-Help on the Get-Help commandlet itself. The syntax for using the help for the Get-Service commandlet would be Get-Help Get-Service. The basic help information will show you the Name, Synopsis, Syntax, Description, and any related commandlets or additional help files associated. There are a few parameter options here that are very useful, as well as the ability to use wildcards to assist in the lookup of help articles if you don't know exactly what it is you are looking for.
The first parameter to remember is -Examples. This will show you syntactically correct examples of commandlet use, along with a description of what each typed command is doing.
The second is the -Detailed parameter. This is where things get a little more interesting. Using this parameter shows all the same things as just using Get-Help, but also gives you all of the available parameters for the commandlet and their descriptions and use.
The third is the -Full parameter. This will show you all of the information that is contained in the help file.
Something important to note here for people just starting out with PowerShell, or who have very limited experience with scripting (this took me a while to understand): not all parameters can be used with each other (duh!). This is due to commandlets having parameter sets. When using Get-Help you will see the Syntax section of the help file, and it may contain two, three, or more instances of the commandlet. An example from Get-Service:
Get-Service [[-Name] <string[]>] [-ComputerName <string[]>] [-DependentServices] [-Exclude <string[]>] [-Include <string[]>] [-RequiredServices] [<CommonParameters>]
Get-Service -DisplayName <string[]> [-ComputerName <string[]>] [-DependentServices] [-Exclude <string[]>] [-Include <string[]>] [-RequiredServices] [<CommonParameters>]
Get-Service [-InputObject <ServiceController[]>] [-ComputerName <string[]>] [-DependentServices] [-Exclude <string[]>] [-Include <string[]>] [-RequiredServices] [<CommonParameters>]
These show the Parameter Sets or what parameters can be used together. This can save you hours of trying to figure out what is wrong with your syntax.
However, the use of wildcards is what I believe makes this a very powerful learning tool for PowerShell. A simple asterisk or two can help you look up all commandlets associated with processes (get-help *process*), or allow you to find the about_ help files (get-help about*). Get-Help even searches the text of the help files, and with the help of wildcards you can find just about anything (example from the book: Get-Help *Breaking*). You can read more about wildcards in the about_Wildcards help.
I challenge anyone reading this who doesn't already use the help to do so. This will make the learning curve much shallower, and it is built in. You can't beat FREE help. There are also some other ways to view the help files, including iPad/iPhone apps, Android apps, the downloadable help files from Microsoft (CHM files), TechNet articles, as well as scripts that construct graphical help file browsers (http://mng.bz/5w8e) from the available help files in PowerShell.

Month of Lunches - Day 2

Ok so my first blog article on the Month of Lunches learning adventure. Where to begin? I guess a little background on me: I have loved computers my whole life, but my first real computer "job" was the United States Navy. Anyone who is prior service with any branch of the military (except maybe the Air Force) will tell you that the learning process can be a little less "ease into it" and a lot more trial by fire. Over my 5 years in the service and 10 years with the government I guess I would say I dabbled in a little of everything. That being said, I have zero background in programming and my command line experience comes almost entirely from batch scripting program installations and patch deployments. This has been the majority of my life for the last 2+ years. I started to learn PowerShell over a year ago from one of the local SysAdmins (our very own PowerShell evangelist Marc Carter). I have always been the type to say "Oh, programming is not for me", "I don't have the patience or the time to learn", or "I'll let someone else do it for me". But you have to start somewhere, and I really want to advance in my chosen career field, so here it goes.

On to the content you are here for. What I have learned so far is that I can use almost all of my command line and batch scripting experience! Some things will change, but for the most part I can use the same commands. This is incredible to me. Not having to immerse myself in code was a great relief. Easing in is a much better way to learn, I think.

PSDrives are going to be very helpful in a lot of situations. Imagine being able to browse registry keys and values like they are directories. Makes for an incredibly powerful tool, right? People generally don't play around in the registry, but I have to dig into 3000+ systems on an almost daily basis. Knowing that soon enough I should be able to easily automate, aggregate, modify, or display all the information I could want without much of a fight. WOW!
I did run into one issue that took me a minute to figure out (read as "a web search for answers"). I began this tutorial going through the command shell and switched to the ISE, because it looks better. I was able to finish all the tasks of the chapter, except for one. I needed to display the contents of a text file, that I had output the process information to, page by page. I couldn't figure out where I was going wrong with the syntax.

                Type c:\filename | More

Looks right to me. Let's go back to the book example:

                DIR | More

Run this against the Windows\system32 directory. It just scrolls through the file and directory information. After slamming my head against a wall for 5 minutes I decided to look it up. Lo and behold, I was right; it just didn't work. Apparently the ISE doesn't handle the | More pipe so well. Works fine in the shell though. I was able to finish, and even on the first chapter I learned a lot more than I already knew. Off to a great start.

Month of Lunches

I am starting this blog as a way to capture my daily posts for the PowerShell Month of Lunches learning experiment with Microsoft PowerShell MVP, Don Jones. This will also be my daily sounding board for adventures in marriage and fatherhood. The last few years have been an interesting learning experience not only at work but at home as well.