We’re onto post 3 of our Puppet series. We have a master server, we have a client, and we’ve made a connection. It’s time to start getting into the real power of Puppet and write our first module.
Module Make Up

So what makes up a module? A module can be as simple as a single init.pp file or as complex as whole directories of files, templates, and just about everything else under the sun to help control your infrastructure.
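To make that a little more concrete, here is a minimal sketch of a one-class module. The module name (ntp) and the package and service names are just examples and will vary by distribution; the layout assumes the default /etc/puppet/modules path.

    /etc/puppet/modules/ntp/manifests/init.pp:

    # init.pp -- the class name must match the module name.
    class ntp {
      package { 'ntp':
        ensure => installed,
      }

      service { 'ntp':
        ensure  => running,
        enable  => true,
        require => Package['ntp'],
      }
    }

With that file in place, the module can be pulled into a node with a simple include ntp in the node definition.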
If you’re reading this, there is a good chance that you do some sort of project work within your job. My blog doesn’t attract a lot of people outside of IT, and the typical IT person is working on project X, Y, or Z, and sometimes all three at the same time. If this isn’t you, I’ll save you some time and you can stop reading now.
Ok, so you are working with your boss and he/she has assigned you a project.
In our first post, we gave you an overview of what Puppet is and how to install the software so it’s available on your system. Now it’s time to get into the guts of the system and get your client connecting to your master server.
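If you want a rough picture of that connection before we dig in, the flow is: run the agent once so it generates and submits a certificate request, then sign that request on the master. A minimal sketch, assuming a Puppet 2.x/3.x-era install and placeholder hostnames (puppet.example.com for the master, client01.example.com for the client):

    # On the client: kick off a run against the master; the first run
    # submits a certificate signing request.
    puppet agent --test --server=puppet.example.com

    # On the master: list pending requests and sign the client's certificate.
    puppet cert list
    puppet cert sign client01.example.com

    # Back on the client: run the agent again to pull its catalog.
    puppet agent --test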
Configuration

By default, Puppet wants to look for a server named puppet. If you need to change this to a different server name, you’ll want to edit /etc/puppet/puppet.conf.
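As a rough example of what that edit might look like, assuming you’re changing the agent-side setting and using a placeholder hostname:

    # /etc/puppet/puppet.conf on the client -- hostname is a placeholder
    [agent]
        server = puppet.example.com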
Back in January, I gave a presentation for the local LUG (CIALUG), and while it was nice to put it all into one presentation, I’d like to go back, break down the various aspects of that talk, and show off what Puppet can do in your organization.
What is it?

Puppet is a tool designed to manage the configuration of Unix-like and Microsoft Windows systems declaratively. The user describes system resources and their desired state, using either Puppet’s own declarative language or a Ruby DSL (domain-specific language).
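As a small illustration of what “declaratively” means here (the file path and contents below are just an example), you describe the end state you want and Puppet works out whether anything needs to change to get there:

    # Desired state, not steps: Puppet only acts if the file drifts from this.
    file { '/etc/motd':
      ensure  => file,
      owner   => 'root',
      group   => 'root',
      mode    => '0644',
      content => "This node is managed by Puppet.\n",
    }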
Before VMworld, I laid out a few predictions about what I thought we’d see at the conference. Let’s see how I did…
vRAM is dead. The rumors were true. This was kind of a gimme, but I’ll take it. So long, vRAM licensing! vCloud 2.0. Got this one too, though it’s vCloud 5.1 to line the numbering scheme up with vSphere. We have snapshots and multiple disk levels supported within the provider vDCs.