The Desktop Revolution Is Upon Us

With the end of support for Windows XP looming just around the proverbial corner, it is time to take stock of the desktop landscape and make hard decisions. Windows XP has dominated the desktop landscape in both home and business for more than a decade. While Windows 7, and to some small degree Windows 8, have widely replaced it, there is still a huge Windows XP install base, and many companies have failed to define their long term strategy for the post-XP world and are still floundering to find their footing.

A little context is, I feel, pretty important. Today it may seem a foregone conclusion that Microsoft will “own” the business desktop space with Mac OSX fighting for a little piece of the action that Microsoft barely notices. This status quo has been in place for a very long time – longer than the typical memory of an industry that experiences such a high degree of change. But things have not actually been this way for as long as it might seem.

Let’s look instead at the landscape of 1995. Microsoft had a powerful home user product, Windows 95, and was beginning to be taken seriously in the business space. But its place there, outside of DOS, was relatively new and Windows 3.11 remained its primary product. Microsoft had strong competition on many fronts, including Mac OS and OS/2 plus many smaller niche players. UNIX was making itself known in high end workstations. Linux existed but had not yet entered the business lexicon.

The Microsoft business desktop revolution happened in 1996 with the landmark release of Windows NT 4.0 Workstation. Windows NT 4 was such a dramatic improvement in desktop experience, architecture, stability and networking capability that it almost instantly redefined the industry. It was Windows NT 4 that created the momentum that made Microsoft ubiquitous in the workplace. It was NT 4 that defined much of what we think of as modern computing. NT 4 displaced all other competitors, relegating Mac OS to the most niche of positions and effectively eliminating OS/2 and many other products entirely. It was in the NT 4 era that the Microsoft Certified Professional and MCSE programs began and that much of the industry’s corpus of rote knowledge was created. NT 4 introduced us to pure 32-bit computing in the x86 architectural space. It was the first mainstream operating system built with networking as its central focus.

Windows NT 4 grew from interesting newcomer to dominant force in the desktop space between 1996 and 2001. In the interim Windows 2000 Pro was released but, like Vista, this was really a sidelined and marginalized technology preview that did little to displace the incumbent desktop product. It was not until 2001, with the release of Windows XP, that Windows NT 4 had a worthy successor: a product of extreme stability with enough new features and additional gloss to warrant a widespread move from the old platform to the new. NT 4 would linger on for many more years but would slowly fade away as users demanded newer features and access to newer hardware.

Windows NT 4 and Windows XP had a lot in common. Both were designed around stability and usability, not as platforms for introducing broad change to the OS itself. Both were incremental improvements over what was already available. Both received more large scale updates (Service Packs, in Microsoft terms) than the Microsoft OSes before or after them, with NT 4 having seven (or even eight, depending on how you count them) and XP having three. Each was the key vanguard of a new processor architecture, NT 4 with the 32-bit x86 platform and XP being the first to offer an option for the 64-bit AMD64 architecture. Both were the terminal releases of their major kernel version. Windows NT 4 and Windows XP together held unique places in the desktop ecosystem with penetration numbers that might never be seen again by any product in that category.

After nearly eighteen years, that dominance is waning. Windows 7 is a worthy successor to the crown, but it failed to achieve the same iconic status as Windows XP and it was rapidly followed by the dramatically changed Windows 8 and now Windows 8.1, both built on the same fundamental kernel as Windows 7 (and Vista, too.)

The field is different today. Mobile devices (phones, tablets and the like) have introduced us to new operating system options and paradigms. The desktop platform is not a foregone conclusion as the business platform of choice. Nor is the Intel/AMD processor architecture a given, as ARM has begun to make serious inroads and looks to be a major player in every space where Intel and AMD have held sway these last two decades.

This puts businesses into the position of needing to decide how they will focus their end user support energy in the coming years. There are numerous strategies to be considered.

The obvious approach, the one that I assume nearly all businesses will take if for no other reason than to maintain the status quo, is to settle into a “wait and see” plan: implement Windows 7 today and hope either that the new interface and style of Windows 8 goes away or that an alternative will appear between now and when Windows 7 support ends. This strategy suffers from focusing on the past and triggering an earlier than necessary upgrade cycle down the road while leaving businesses behind on technology today. It is not a strategy that I would generally recommend, but it is very likely the most common one as it allows for the least “pain today” – a common trend in IT. Going with Windows 7 represents an accumulation of technical debt.

Those businesses willing to really embrace the Microsoft ecosystem will look to move to Windows 8 and 8.1 to get the latest features, greatest code maturity and to have the longest support cycle available to them. This, I feel, is more forward thinking and embraces some low threshold pain today in order to experience productivity gains tomorrow. This is, in my opinion, the best investment strategy for companies that truly wish to stick with the Microsoft ecosystem.

However, outside of the Microsoft world, other options are now open to us that, realistically, were not available when Windows NT 4 released. Most obvious is Apple’s Mac OSX Mavericks. Apple knows that Microsoft is especially vulnerable in 2014, with Windows XP support ending and users fearing the changes of Windows 8, and is being very aggressive in its technical strategy, both on the hardware side with the release of a dramatic new desktop device (the black, cylindrical Mac Pro) and with the free release (for those on Apple’s hardware, of course) of Mac OSX 10.9. They are pushing hard to get non-Mac users interested in their platform and to get existing users updated and using the latest features. Apple has made huge inroads into Windows territory over the last several years and knows full well that 2014 is its biggest opportunity to take a sizable market chunk all at once. Apple has made the Mac a serious contender in the office desktop space, and it is worth serious consideration. More and more companies are either adding Macs to their strategy or switching to Mac altogether.

The other big player in the room is, of course, Linux. It is easy to proclaim that 2014 will be the “Year of the Linux Desktop” which, of course, it will not be. However, Linux is a powerful, mature option for the business desktop, and with the industry’s steady move to enterprise web-based applications the previous prohibitions against Linux have significantly faded. Linux is a strong contender today if you can get it in the door. It is cost effective and easy to support; the chink in its armor is the large number of confusing distros and desktop options. Linux is hardly going to take the desktop world by storm, but the next five months do offer one of the best time periods to demo and trial some Linux options to see if they are viable in your business. In preparation for the likely market surge, most of the key Linux desktop players (SUSE, Ubuntu and Mint) have released big updates in the last several weeks, giving those looking to discover Linux for the first time (or for the first time in a long time) something especially tempting to discover. The Mint project has especially taken the bull by the horns in recent years, introducing the MATE and Cinnamon desktops, which are appealing to users looking for a Windows 7-esque desktop experience with a forward looking agenda.

Also in the Linux family but decidedly its own animal, Google’s ChromeOS is an interesting consideration for a company interested in a change. ChromeOS is, most likely, the most niche of the desktop options, but a very special one. ChromeOS takes the tack that a business can run completely via web interfaces, with all applications being written to be accessed in this manner. And indeed, many businesses are approaching this point today, but few have made it completely. ChromeOS requires a dramatic rethinking of security and application architectures for a normal business and so will not see heavy adoption, but for those unique businesses capable of leveraging it, it can be a powerful and extremely cost effective option.

Of course, an entirely new category of options has appeared in recent years as well – the mobile platforms. These existed when Windows XP released, but they were not ready to replace existing desktops in any way. During the Windows XP era, however, mobile platforms grew significantly in computational power, and the operating systems that power them, predominantly Apple iOS and Google Android, came into existence and became the most important players in the end user device space.

iOS and Android, and to a lesser extent Windows Phone and Windows RT, have reinvented the mobile platform into a key communications, productivity and entertainment platform rivaling the traditional desktop. Larger mobile devices, such as the iPad, are widely displacing laptops in many places and, while different, often provide overlapping functionality. It is becoming more and more common to see an iOS or Android device being used for non-intensive computing tasks that traditionally belonged to desktop or laptop devices. It is hard to imagine mobile platforms being the sole computing platform of a business over the next few years, but it is possible that we will see this begin to happen in fringe-case businesses during this product cycle.

Of course, any talk of the desktop future must take into account changes not just in products but in architectures. Marketing around VDI (Virtual Desktop Infrastructure) has propelled virtualized and centralized computing architectures to the forefront, along with the concept of hosted or “cloud” desktop offerings (including Desktop as a Service.) While still nascent, the category of “pay by the hour” utility desktop computing will likely grow over the next several years.

Of course, with so many changes coming, there is a different problem that will be facing businesses. For the past two decades just about any business could safely assume that nearly all of its employees would have a Windows computer at home where they would become accustomed to the current interface and possibly much of the software that they would use on a day to day basis. But this has changed. Increasingly, iOS and Android devices are the only ones that people have at home, and among those who still have traditional computers, keeping a current version of Windows is less and less common while Mac OSX and Linux are on the rise. One of the key driving forces making Windows cost effective, that is, the lack of training necessary, may swing from being in its favor to working actively against it.

Perhaps the biggest change that I anticipate in the next desktop cycle is not a new desktop choice but a move to more heterogeneous desktop networks where many different OSes, processor architectures and deployment styles co-exist. As BYOD proliferates and support of different device types becomes necessary, and as user experience changes and business apps move to web platforms, the advantages of a disparate “choose the device for the task or user” strategy will become more and more apparent. Businesses will be free to explore their options and choose more freely based on their unique needs.

The era of desktop lock-in is over. Whether because of market momentum, existing user experience or application limitations, the reasons that kept businesses tightly coupled to the Windows platform are fading quickly. The future offers a landscape of choices, both in what we deploy and in how we deploy it.


Doing IT at Home: Ticketing and Monitoring

Treating your home network more like a business network is often far easier than people realize and far more useful too. There is a lot of utility in how businesses run their IT departments, and it is often only oversight or social custom that keeps us from doing more IT at home.

In this third installment of my “Doing IT at Home” series of articles, I’m going to look at ticketing and monitoring systems. Home networks generally consist of end user workstations, that is, desktops, laptops, tablets and the like. Servers are a rarity although, if you are following along with this series, perhaps they are common in your home.

Rarely are home networks monitored in any way. This is a major differentiator between common home and business networks, and it is a great place to add functionality and value to your home network. Monitoring does not have to be hard or expensive. You can almost certainly run your monitoring from some hardware device that you already have in your home, such as an existing Linux, Solaris or FreeBSD virtual machine or a Windows desktop, as examples. There are many free business network monitoring solutions such as Spiceworks, Zenoss, Nagios and Zabbix. Implementing one or more of these, or one of many others, in your home can be very beneficial and educational.
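
To give a sense of how approachable these tools can be, here is a minimal sketch of what watching a single home machine might look like in Nagios. The host name and address are invented for illustration; the check_ping command and the linux-server and generic-service templates come from the sample configuration that ships with a standard Nagios install.

    # home.cfg - a hypothetical home host and a simple ping check
    define host {
        use         linux-server
        host_name   home-nas
        alias       Home NAS box
        address     192.168.1.50
    }

    define service {
        use                  generic-service
        host_name            home-nas
        service_description  PING
        check_command        check_ping!100.0,20%!500.0,60%
    }

Two short stanzas and the box shows up on the dashboard; disk, service and SNMP checks are added the same way.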

For most IT pros looking to expand their home solution set, Spiceworks is the most obvious choice. Effectively everyone has Windows at home, even if only in desktop form, and that is all that it takes to run Spiceworks. This makes Spiceworks a great starting point for nearly everyone as a first monitoring platform at home, and as it is geared towards desktop and small business management it is very well suited to the types of systems and environments likely to be found in a home.

Spiceworks is especially valuable for a project such as this because it delivers both the monitoring and alerting component and a built-in helpdesk component, killing two birds with one stone, which is why I included both concepts together in one article. They could easily be done separately, and helpdesk functionality is easily found as an externally hosted service, but using Spiceworks gives you an opportunity to put both on the home network; the goal is experience, not practicality.

Getting your home network monitoring into tip top shape is a great learning exercise. You will learn not only how the specific monitoring product works but also about networking, operating system specifics, network monitoring protocols (such as SNMP) and more. Many IT pros find that a good monitoring package causes them to learn more about their internal DNS than they ever thought that they would need to know.
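
If you want to see for yourself what SNMP hands to a monitoring tool, the net-snmp command line utilities let you poke at a device directly. The address and community string below are examples only; most home routers and NAS devices let you enable a read-only SNMP agent with a community string of your choosing.

    # Query uptime and system description from a hypothetical device at 192.168.1.1
    # using the read-only community string "public"
    snmpget -v2c -c public 192.168.1.1 1.3.6.1.2.1.1.3.0    # sysUpTime
    snmpget -v2c -c public 192.168.1.1 1.3.6.1.2.1.1.1.0    # sysDescr

    # Walk the interface table to see port status and traffic counters
    snmpwalk -v2c -c public 192.168.1.1 1.3.6.1.2.1.2.2

Everything your monitoring package graphs over SNMP ultimately comes from queries like these.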

Using ticketing at home encourages and demonstrates organization and is useful for learning important concepts in IT management. Having tickets at home can be very handy for maintaining change management on your home network and for organizing tasks that need to be done or that you plan to tackle in the future, and it is especially useful if you support a large family environment where you want family members to be able to submit and track their requests. This gets even handier if you are doing this for an extended family network and you are supporting more than just your own household. This may sound a little silly to do for a home environment, but remember, the goal is to learn products and processes, not to be particularly productive for a home environment.

Like many good home IT projects, this is one that helps to add “life” to your network. Too many projects result in unused systems that sit idle and qualify as a project but serve no actual purpose once implemented. Monitoring, alerting and ticketing are systems that will actually interact with your network and serve an ongoing purpose, which makes them ideal for educational projects. You’ll not only implement them but maintain them, performing updates and possibly extending them with additional functionality.

A good home IT project will add value to your home as well as your portfolio of experience and, hopefully, will demonstrate end to end experience not only as an implementer of a system but as a maintainer and as an end user of that system – a well rounded level of experience often lacking in those who only implement or utilize systems in an enterprise environment.


Contract to Hire

There are so many horrible hiring practices commonly used today that one hardly knows where to begin. One of the most obviously poor is the concept of “Contract to Hire” positions. The concept is simple: you hire someone on a temporary contract and, if all things work out well, you hire them on as a full time employee. The idea is that the firm can “test drive” the employee for six months and make a sound hiring decision. On the surface this may feel like a huge win for the employer, and the employee, of course, gets to test drive the employer as well.

But if we look at the idea from an employee’s perspective the misguided nature of this approach becomes very obvious.

There are essentially two types of workers: consultants and traditional employees. Or, at least, there are people who desire to be one or the other. Consulting is a unique way of working and a certain percentage of professionals prefer it. Most workers want to be employees, with all of the stability and benefits that that implies. Very few people desire to be consultants; it is a very high stress way to work. This doesn’t mean that people don’t change over time; it is common for young professionals to like consulting work and to desire a change to full employment at some later point in their careers.

The description above shows the problem with the contract to hire approach. Which person do you hire? The thought is that you will hire the latter, the person wanting to be a long term employee with good stability and benefits who seeks out the contract to hire position in the hopes of becoming an employee. There is one problem there, however: people who want to be stable employees don’t want to contract first. Everyone knows that contract to hire means “contract with little to no chance of being hired afterwards.” So workers seeking regular employment will only turn to contract to hire positions if they are unable to find regular employment, leaving the employer with a strategy of hiring only those who are failing to find work anywhere else – a weak strategy at best.

The other risk is that consultants will take contract to hire jobs. In these cases, the consultant takes the position with no intention of accepting an offer at the end of the contract. The company may spend six months or even a year training, testing, nurturing and convincing the consultant to love their job, and when it comes time to hire them, they get declined.

There is no positive scenario for contract to hire. At best you hire a consultant whose opinions about work style are changed by the amazing nature of the work environment and who magically doesn’t get restless after accepting a position. But this is even rarer than contract to hire employers actually attempting to hire someone at the end of the contract. In the real world a company engaging in this practice is reduced to hiring either the least hireable candidates or consultants with little to no intention of entertaining the “bait” offer. This also leaves the company with the belief that it has a carrot to dangle in front of the consultant when it in fact has nothing special to offer.

It is a case of an employer looking to take advantage of the market but, without thinking it through clearly, setting itself up to be taken advantage of. The best employee candidates will bypass them completely and full time consultants will see the opportunity to strike.

The idea is simple and applies to all hiring: don’t hire a person based on factors that don’t apply to the actual job. Hire employee types for employee positions and consultants for contract positions. It is the same reason you would not attempt to interview engineers for marketing positions – in theory someone might have crossover skills, but you eliminate almost any chance of ever finding the right person. Hire honestly to meet your needs and many problems will be eliminated.


Doing IT at Home: Good Documentation

One of the most rewarding home IT projects that I have done was to implement a system for “home documentation.” In a business environment documentation is critical to nearly any process or department. At home, documentation is critical too, yet it is often overlooked or approached from a completely different perspective than it is in a business, and there is no need for this. Many people resort to special tools, iPhone apps or physical pen and paper notepads to address documenting things around the house. I propose something far more enterprise and elegant: a wiki.

Wikis have been around for some time now and nearly everyone is familiar with their use.  At its core a wiki is just a web-based application.  Wikis come in many shapes and forms and with varying degrees of complexity and run on different platforms.  This makes them very flexible and applicable to nearly anyone, regardless of what kind of systems you run at home.

The value of a wiki for home use becomes obvious quite quickly once the project is underway. Documenting bills, accounts, purchases, home repairs, part numbers, service schedules, insurance information and, of course, your home network all makes perfect sense and is easy to do. The wiki does not need to be large, just big enough to be useful. Mine is certainly not sprawling, but all of my important data is housed in one convenient place and is text searchable. So even if I don’t know how I organized something, I can just search for it. All of my important data is there, in a single place, so that I can look it up when needed and, more importantly, my wife can look it up and update it when needed. It allows for simple, reliable collaboration. And I make mine available from inside or outside the home, so I can access my information from work or while traveling. That is functionality that traditional home documentation systems lack.

While there are many wikis available today, I will mention three that make the most sense for the vast majority of people: DokuWiki, MediaWiki and SharePoint from Microsoft. DokuWiki and MediaWiki are free and have the advantage of running on UNIX, so they can be deployed in a variety of situations for low or no cost. DokuWiki shines in that it needs no database and uses nothing but the filesystem, making it incredibly simple to deploy, manage, back up and restore. It is nothing more than a set of text files and a small PHP application that writes them. MediaWiki is, by far, the most popular wiki option and, like DokuWiki, is a PHP application, but it is backed by a database, normally MySQL, making it more complex but giving it more power as well. Many people would choose MediaWiki for home use (as I do) because it provides the most direct experience for the largest number of businesses. SharePoint is free if you have a Windows Server and is much more complex than the pure wiki options. SharePoint is an entire application platform that also includes a wiki as part of its core functionality. If you are looking to move more heavily into the Microsoft ecosystem then using SharePoint would likely make the most sense and will provide a lot of additional functionality like calendaring and document storage too.
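
To show just how light the simplest option is, here is roughly what standing up DokuWiki looks like on a Debian or Ubuntu machine that already runs Apache with PHP. The paths and ownership below are examples for that platform; check the DokuWiki site for the current download link before copying anything verbatim.

    # A rough DokuWiki install on an existing Apache + PHP box (paths are examples)
    cd /var/www
    wget https://download.dokuwiki.org/src/dokuwiki/dokuwiki-stable.tgz
    tar xzf dokuwiki-stable.tgz
    mv dokuwiki-*/ wiki
    chown -R www-data:www-data /var/www/wiki

    # Then browse to http://yourserver/wiki/install.php and walk through
    # the one-page installer; there is no database to create.

Because everything lives in that one directory, backing up the entire wiki is nothing more than copying it somewhere safe.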

Running a wiki can help give meaning to a home web server. Instead of sitting idle it can house important applications and really be used regularly. While not a massive project, having a wiki at home can be an important step toward giving meaning to the home IT environment. IT at home often suffers from a lack of direction or purpose, with systems implemented only as a lab and lacking real world use. Like the PBX example in an earlier article, a home documentation wiki can give your network meaning and purpose.


Doing IT at Home: The Home PBX

I am often asked what projects I would recommend for someone to do at home to get more IT experience, and I am often at a loss to come up with anything very interesting that is both educational and could actually prove practical in a daily use kind of way. Having home IT projects that are actually used, day in and day out, really changes how projects are approached, making them a little bit more like production systems: real users use them, performance matters and ongoing management becomes an important consideration. Over the years I have discovered a few home IT projects that really make sense in a “more than just a lab for learning purposes” kind of way. One of the best is running your own PBX to replace your home telephone.

Today, home telephones are becoming less and less common, partially because their traditional functionality has been widely displaced by mobile phones and partially because the legacy telephone system, even when delivered over VoIP, is rather archaic.  But in business, telephony is taking off as modern VoIP PBXs add new functionality and lower costs.  This is one place where treating your home like a business can really pay off.  People who have moved to mobile phones only will likely have noticed a few problems with that model.

Why don’t mobile phones replace home phones?

  • Mobile phones are attached to a person rather than a place.  The concepts behind using each are different.  Reaching a person is far more useful, but both have their uses and special functions.
  • Mobile phones are highly dynamic.  They turn on and off, they roam, they leave the country, they lose signal, they lose power, they get lost.  Home phones are highly static in comparison.
  • Mobile phones require one line per person, a home phone can provide many extensions from one line or number.
  • Home phone systems can offer redundancy or failover.
  • Home phones can be used remotely, over the Internet, from anywhere without needing to arrange international calling ahead of time, or at all.
  • Home phones can offer features like conference rooms, ring groups, queues, etc.

Building a PBX at home can be very low cost while providing a lot of functionality that traditional phones and mobile phones fail to provide.  I, myself, am very glad that I have a home phone still but was disappointed that I was paying so much for such limited functionality using a traditional carrier.  Even after moving to a pure VoIP carrier I was still paying more for my phone at home than the office paid for multiple business lines.  And an idea was born.

There is always more than one way to skin a cat, and there are many PBX products that one could use for a home project of this nature. Far and away, though, the most popular will be a flavor of Asterisk, the free, open source, enterprise voice switching system. Within the Asterisk family, Elastix is the obvious choice for a project like this. Not only does this give a good opportunity to learn a very popular telephony system, it also provides good experience in production management of CentOS (Red Hat) Linux. Another option would be 3CX on Windows, for example; it is more limited and requires more licensing, but depending on your career path it may make equal or better sense for you.

Having a true, enterprise PBX in your home can serve many needs, all of which play wonderfully into expanding a professional portfolio, and as running a home PBX remains rather an exclusive endeavor it is an ideal talking point for an interview. Having a PBX means that all of the control usually reserved for a business is now available at home, for almost no cost (a couple of these features are sketched in the dialplan below):

  • Extensions for each family member (kids want their own lines, no problem).
  • Conference room(s) for family meetings, a la Skype but easier, especially for family members dialing in from traditional phones or mobile phones.
  • Ring and hunt groups for handling complex calling situations (just the parents, or just the kids).
  • Flexible voicemail options, detailed call reporting and household paging systems.
  • Extension to extension calling and remote extensions, whether for family members when they are out of the house or for extended family who just want an extension on the system for unlimited, free calling around the family.
  • Video phones and overhead paging (a front door announcement system, perhaps).
  • Multiple shared lines for easy efficiency.
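
To make a couple of those features concrete, here is a tiny sketch of what a ring group and a family conference room look like in an Asterisk dialplan. The extension numbers and device names are invented for illustration; on Elastix you would normally build these through the web GUI rather than editing extensions.conf by hand.

    ; extensions.conf - hypothetical internal dialplan snippets
    [from-internal]
    ; 600 rings both parents' phones for 20 seconds, then goes to voicemail
    exten => 600,1,Dial(SIP/201&SIP/202,20)
    exten => 600,n,Voicemail(201@default,u)
    exten => 600,n,Hangup()

    ; 700 is a standing family conference room
    exten => 700,1,ConfBridge(700)

A handful of lines like these is all that the ring group and conference room amount to under the hood.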

A PBX is a great resource to virtualize, especially if you are running Linux. A PBX uses effectively no resources when idle and very little when active, even with several users. It will easily be as small as the smallest web server that you are running at home, and almost no storage is needed, only just enough to hold the voicemails and logs. Ten years ago only paravirtualization could handle the needs of audio processing, limiting you to Xen-based virtualization products. Today vSphere and HyperV join XenServer in being able to handle this workload without breaking a sweat (others will work too.) So whatever virtualization you are using at home will work just fine (though you may run into issues if you are using Type 2 virtualization like VirtualBox.)

The only actual expense for a home PBX, and truly even for a small business, is the cost of the trunk that brings in the connection to the public switched telephone network (the thing that provides the phone number.) A typical home telephone service might cost $20 – $50 / month, even without a single call being made and with no services beyond a simple phone line, even when using VoIP. There are some exceptions, but very few. For my own home PBX project I selected a commercial VoIP carrier that gives me four lines in a single SIP trunk for $11/mo, everything included: unlimited incoming minutes, the DID (the phone number) and more. The only extra is outgoing minutes, which are super cheap. My phone bill rarely tops $13! That’s pretty amazing considering I turned off a single-line $35/mo service and now have all of those features of a PBX plus a pretty amazing talking point.
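
For the curious, pointing a PBX at a commercial SIP trunk is typically only a few lines of configuration. This is a rough chan_sip sketch with an invented provider, account and password; your carrier supplies the real values, and Elastix exposes the same settings through its trunk configuration screens.

    ; sip.conf - hypothetical SIP trunk registration
    [general]
    register => myaccount:mypassword@sip.example-carrier.com

    [example-trunk]
    type=peer
    host=sip.example-carrier.com
    username=myaccount
    secret=mypassword
    fromuser=myaccount
    context=from-trunk
    insecure=port,invite

Inbound calls then land in the from-trunk context of the dialplan, where they can ring a group, hit an IVR or drop to voicemail.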

If you are looking for an interesting project that will do wonders for your resume while actually adding some practical value to your home, a PBX may be a great place to start.


Replicated Local Storage

With the increased exposure of virtualization, and the popularization of platform-level high availability solutions that has come with it, the need for and awareness of high availability storage has come to the forefront of all of IT, and of the SMB realm in particular. Storage has become, not surprisingly, the most challenging aspect of virtualization today.

Most people investigating high availability storage solutions are aware of replication between SAN or NAS devices but are not aware that local storage can be replicated synchronously as well, allowing for the same high availability practices without the need for external storage devices. In fact, Replicated Local Storage (or RLS) is (and must logically be) the same technology used by a SAN or NAS to achieve high availability. RLS is the underpinning of all high availability storage solutions; we simply only refer to it by this name when we are looking at the device as being “local.” If we were working on a SAN or a NAS then RLS would refer to its own replication technology. When looking at a server connected to a replicated SAN we think of that replication as being non-local. Local is a matter of current perspective. At a technical level, all replication is RLS at the end of the day.

RLS technologies are popular on certain operating systems such as Linux, where DRBD is native and accepted into the kernel. The FreeBSD project has, in recent years, introduced its own native RLS technology known as HAST. Windows does not have a native RLS option today. Linux and FreeBSD lead the RLS charge among the common operating systems used in the SMB and are driving the industry forward with broader adoption of these technologies.
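
To give a feel for what RLS looks like in practice, here is a minimal, hypothetical DRBD resource definition mirroring one partition between two machines. The host names, device paths and addresses are examples only; the real effort goes into planning the replication link and the failover behaviour built on top of it.

    # /etc/drbd.d/r0.res - a minimal mirror of one partition between two
    # hypothetical hosts, "alpha" and "beta" (replication defaults to
    # protocol C, i.e. fully synchronous)
    resource r0 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        meta-disk internal;

        on alpha {
            address 192.168.10.1:7789;
        }
        on beta {
            address 192.168.10.2:7789;
        }
    }

Whatever the platform expects, an LVM volume group or a filesystem holding the virtual machine images, sits on top of /dev/drbd0, and every write is mirrored to the partner host as it happens.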

In virtualization we see many other approaches taken to provide RLS for virtualization platforms. KVM, which is built on Linux, and the Xen family (including Xen, XenServer and others), which relies on Linux, leverage DRBD for their RLS. The VMware ecosystem uses a replicated VSA approach, with popular options being VMware’s own VSA product and HP’s VSA product. Both use a virtualized, replicated NAS appliance to provide RLS to the platform. On Microsoft’s HyperV the same is accomplished through Starwind’s replicated SAN platform, which behaves, essentially, the same as a VSA.

RLS is rapidly becoming more and more important as it scales well in small scale virtualization, taking what has long been available as a niche clustering technology and pushing it into the mainstream. Before high availability for virtualization was popularized in the SMB world these technologies were almost exclusively used for small scale UNIX high availability clustering. They were important technologies and often used, but they received little industry attention as they were an “under the hood” detail of some UNIX systems. Today, with the rapid uptake of high availability for virtualization, RLS has gone from a niche technology to one of the key and most appropriate technologies for nearly any SMB wishing to achieve high availability for its virtualization platform.
