Puppet + Vagrant VS librarian-puppet

Only recently (call me old-fashioned) have I started working with Vagrant to test my puppet manifests first, instead of committing everything, pushing to the puppet master and using test environments on live machines (VM or bare metal, doesn’t matter).
At the same time, I wanted to solve another outstanding problem I had with puppet: external modules. For months/years I’ve suffered the pain of using git submodules to integrate external modules with the in-house ones, but enough is enough. Searching for a possible solution, I stumbled upon librarian-puppet, which seemed to do everything I needed.

Following Librarian’s installation instructions, I moved my modules/ directory to a separate git repository and created a nice and tidy Puppetfile with all my modules (which at the moment were all internal modules). Nice, everything worked as expected: my modules were installed in my local puppet repo with `librarian-puppet install`, and I could even add external modules from the Forge with all the dependencies automagically resolved. Happy unicorns are puking colorful rainbows and everything, buuuut…

What happens when I want to edit some internal module in my new, separated modules repository? And what about adding a new role with a couple of profiles that use external modules (defined in the Puppetfile)? BUMMER.

I have to commit everything in my puppet-modules repo and push it somewhere before testing it in Vagrant, because librarian-puppet doesn’t support raw directories. Moreover, I need to point my Puppetfile at the new development branch!!
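
To make the pain concrete: every in-house module under test would need its Puppetfile entry rewritten to something like this (module name, git URL and branch are all made up for illustration):

mod "mycompany/base",
  :git => "git@git.example.com:puppet/base.git",
  :ref => "my-dev-branch"   # to be updated at every new test branch…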

That’s not fine at all.

But thankfully both Vagrant and Puppet support more than one modules directory at once, so:

  • first, I’ve added my in-house modules back to the original puppet repo, under modules/
  • then, I’ve set up librarian-puppet to use a different path to install the modules it manages:
    librarian-puppet config path modules-contrib --global

(in my puppet repo, remember to gitignore modules-contrib/, e.g. with `echo "modules-contrib/" >> .gitignore`!)

  • relevant fragment of my Vagrantfile:
    config.vm.provision :puppet do |puppet|
      puppet.manifests_path    = "#{$puppet_dir}/manifests"
      puppet.manifest_file     = "site.pp"
      puppet.module_path       = ["#{$puppet_dir}/modules", "#{$puppet_dir}/modules-contrib"]
      puppet.hiera_config_path = "#{$puppet_dir}/hiera.yaml"
    end
    
  • and finally, if, like me, you’re using a puppet master, add this line to the [main] section of puppet.conf:
    modulepath = /etc/puppet/modules:/etc/puppet/modules-contrib

Now, it’s just a matter of running `librarian-puppet install` before provisioning Vagrant, or right after pushing changes to the Puppet master.
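
For reference, my local test loop now looks something like this (a sketch; the repo path is hypothetical):

cd ~/src/puppet            # root of the main puppet repo
librarian-puppet install   # resolves the Puppetfile into modules-contrib/
vagrant provision          # uses both modules/ and modules-contrib/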

Securely forward a TCP service with SSH

Sometimes you want to directly access a server on a remote LAN beyond a firewall and you don’t want to set up a VPN, or maybe you want to encrypt an unencrypted service in a simple and easy way. If you can contact a [remote] SSH server, then an SSH client is all you need!

Let’s look at it in more depth:
ssh -fn -N -L 1080:remote_www.server.com:80 root@remote-ssh-proxy.server.com

The -N and -L switches do the trick! The first field of the -L argument (1080 in this example) is the local port you will use to connect directly to the remote service, located at the address remote_www.server.com on port 80. So, for example, you can point your browser to http://localhost:1080 and magically you will have established an encrypted connection to that web server (well, if you have a user/password for remote-ssh-proxy ;)
The -N switch is mandatory in this use case: it tells ssh not to execute any remote command, permitting the tunnel-only connection.
The -f and -n switches put the connection in the background, so the tunnel will stay open and your console won’t be blocked.
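
A quick way to check that the tunnel is up (assuming you have curl around):

curl -I http://localhost:1080/   # should print the remote web server’s response headers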

You can swap -L for -R, which does just the reverse: it forwards a port from the remote proxy to a local machine.
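
For example, this (with the same hypothetical hostnames as above) exposes a service listening on your local port 8080 on port 1080 of the remote proxy:

ssh -f -n -N -R 1080:localhost:8080 root@remote-ssh-proxy.server.com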

Tools every sysadmin should be aware of – part I

I’m going to start this list of tools that really help me improve my daily work with two (not so little) programs that can save you hours if you are administering more than 10 machines: OCS Inventory and GLPI. These are two web-based programs (based on PHP+Apache+MySQL, so really easy to deploy) that can do almost all the dirty work of inventorying, organizing and managing your IT infrastructure.

OCS Inventory, with its agents, is the heart of the system: you have to install an agent on every machine you want to inventory (and you can do it with automated scripting, even on Windows!) and this agent will collect every bit of hardware/software information on that computer and store it in the OCS Inventory database. Then, firing up the web-based interface, you can review all the collected data directly in OCS Inventory, although it’s a bit rough and not very user friendly. But the most important task of OCS Inventory is not displaying data, it’s collecting it. I have inventoried Windows XP, Windows Vista, every flavour of Linux, FreeBSD, OSX etc. And when you need to manage all this data…

…enter GLPI. GLPI is really geared towards sysadmins/helpdesks who have to know everything about the hardware fleet they are managing. It sports a relatively simple user interface and a very, very rich feature set, and it can “suck” all the data from OCS Inventory in an automated fashion. GLPI is really amazing: if you are thinking of a feature you would need in this kind of program, GLPI has got it. Searches based on hardware data like MAC addresses? Got it. Software management, so you can keep track at any moment of where you put that particular Photoshop license? Got it. Contract/warranty management, with integrated reminders? Got it.

You should definitely check these two out, they will make your life easier, guaranteed!

Migrate a Firefox profile

Today, after the latest crazy behaviour of my old Firefox profile, I finally decided to migrate it to a new, clean one. Yeah, because when you are experiencing things like add-ons/extensions that won’t install, permitted pop-ups not working and pages not refreshing correctly, it’s time to change profile.

So, what do you have to do to create a new, better-working profile without losing all your precious information (bookmarks, history and passwords)?


$ firefox -ProfileManager #create a new profile, leave it as default and close FF
$ cd ~/.mozilla/firefox
$ cp xxxx.oldprofile/bookmarks.html yyyy.newprofile # bookmarks
$ cp xxxx.oldprofile/history.dat yyyy.newprofile # pages history
$ cp xxxx.oldprofile/signons2.txt yyyy.newprofile # saved logins (encrypted)
$ cp xxxx.oldprofile/key3.db yyyy.newprofile # key database, needed to decrypt the logins!!
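
If you are not sure which directory is which, the profiles (and which one is the default) are listed in profiles.ini:

$ grep -A3 '^\[Profile' ~/.mozilla/firefox/profiles.ini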

Next, start Firefox again and that’s it! All your history, bookmarks and passwords are there, and you now have a 100% working Firefox again.

Note: you can apply this tip on Windows, OSX and all the other platforms supported by Firefox as well.

Oneliner: play with dates

If you need to work with dates in a shell script, this one-liner could be useful:

echo $((`date +%s` - $OFFSET))|awk '{print strftime("%Y-%m-%d",$1)}'

What it does: it takes the current system time as an epoch timestamp, subtracts $OFFSET (which should be in seconds) and then converts the result to the format YYYY-MM-DD (but you can use any output format supported by strftime, see man strftime). Useful if you want to do some quick date calculation without having to fight with months, leap years and so on.
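
For example, to print the date of one week ago (note that strftime in awk is a GNU awk extension):

OFFSET=$((7*24*3600))   # 7 days, in seconds
echo $((`date +%s` - $OFFSET))|awk '{print strftime("%Y-%m-%d",$1)}'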

Mount a FreeBSD NFS share under MacOSX

By default, the FreeBSD NFS server only accepts NFS connections coming from privileged source ports (< 1024), as a “security” measure. This prevents OSX clients from correctly mounting NFS shares, because the OSX client uses an unprivileged source port by default: even if the mount is executed with sudo, your FreeBSD server will still complain with something like:
kernel: NFS request from unprivileged port

To solve this, the easiest way is to add the -P parameter on the client side, mounting the share with

sudo mount_nfs -P server.address:/path/to/share /path/to/local/directory
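
Alternatively, if you control the server and don’t mind relaxing the check, FreeBSD should let you accept requests from unprivileged ports too; if I remember correctly the knob is in /etc/rc.conf (double-check the option name on your FreeBSD version):

nfs_reserved_port_only="NO"   # then restart the NFS server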