
No One Ever Got Fired For Buying…

It was the 1980s when I first heard this phrase in IT: “no one ever got fired for buying IBM.”  The idea was that IBM was so well known, trusted and reliable that it was the safe choice of vendor for a technology decision maker to select.  As long as you chose IBM, you were not going to get in trouble, no matter how costly or ineffective the resulting solution turned out to be.

The statement on its own feels like a simple one.  It makes for an excellent marketing message and IBM, understandably, loved it.  But it is what is implied by the message that causes so much concern.

First, we need to understand the role of the IT decision maker in question.  This might sound simple, but it is surprising how easily it can be overlooked.  Once we delve into the ramifications of the statement itself, it is far too easy to lose track of the real goals.  In the role of a decision maker, the IT professional is tasked with selecting the best solution for their organization based on its ability to meet organizational goals (normally profits).  This means evaluating options, shielding non-technical management from salespeople and marketing, understanding the marketplace, and performing careful research and evaluation.  These things seem obvious, until we begin to put them into practice.

What we have to then analyze is not that “no one ever got fired for choosing product X”, but what the ramifications of such a statement actually are.

First, the statement implies an organization that is going to judge IT decision making not on its merits or applicability but on the brand name recognition of the product maker.  For a statement like this to have any truth behind it, the organization must not only lack the ability or desire to evaluate decisions, but must also prefer large, expensive brand names (the statement is always made in conjunction with items that cost far more than the alternatives).  An organizational preference for expensive, harder to justify spends is a dangerous one at best.  We must assume not only that buying the most expensive, most famous products will be judged more favorably than buying less expensive or less well known ones, but that buying products at all is seen as preferable to not buying them; even though often the best IT decision is to not buy anything when no need exists.  Prioritizing spending over savings for its own sake, without consideration for the business need, is very bad indeed.

Second, once we recognize the organizational reality that this implies, and that the IT decision maker is willing to seize the opportunity to leverage corporate politics as a means of skipping, possibly completely, the time and effort of a true assessment of the business’s needs, we have a serious question of ethics.  Whether out of fear that the organization will not properly evaluate the results, or will blame the decision maker for unforeseeable events after the fact, or out of a desire to take advantage of the situation and be paid for a job that was not done, we have a significant problem individually, organizationally, or both.

For any IT decision maker to adopt this mindset, that there is safety in a given decision regardless of suitability, there has to be a fundamental distrust of the organization.  Whether that distrust is warranted is not known, but the IT decision maker must believe it for such a thought to even exist.  In many organizations it is understandable that politics trump good decision making and that it is far more important to make decisions for which you cannot be blamed than to honestly try to do a good job.  That is sad enough on its own, but too often this is simply an opportunity to skip the very job for which the IT decision maker is hired and paid; instead of doing a difficult job that requires deep business and technical knowledge, market research, cost analysis and more, they simply allow a vendor to sell whatever it wants to the business.

At best, it would seem, we have an IT decision maker with little to no faith in the ethics or capabilities of those above them in the organization.  At worst we have someone actively taking advantage of a business: being paid to be a key decision maker while, instead of doing the job for which they were hired, or even doing nothing at all, actively putting their weight behind a vendor that was never properly evaluated, possibly solely to avoid doing any of the work themselves.

What should worry an organization is not that vendors that could often be considered “safe” get recommended or selected, but rather why they were selected.  Vendors that fall into this category often offer many great products and solutions, or they would not earn this reputation in the first place.  But likewise, after gaining such a reputation, those same vendors have a strong financial incentive to take advantage of this culture and charge more while delivering less, as they are not being selected, in many cases, on their merits but instead on their name, reputation or marketing prowess.

How does an organization address this effect?  There are two ways.  One is to evaluate all decisions carefully in a post mortem structure to understand what good decisions look like, and not to limit post mortems to obviously failed projects.  The second is to look more critically, rather than less critically, at popular product and solution decisions, as these are red flags that decision making may have been skipped or undertaken with less than the appropriate rigor.  Popular companies, assumed standard approaches, and solutions found commonly in advertising or commonly recommended by sales people, resellers, and vendors should be examined with a discerning eye, more so than less common, more politically “risky” choices.


Is it Time to Move to Windows 8?

Microsoft’s latest desktop reboot is out in the wild and lots of people are getting their hands on it and using it today.  Is it time to consider moving to Windows 8?  Absolutely.

That doesn’t mean that Windows 8 should be your main desktop later this afternoon, but considering a move to Windows 8 is important to do early.  It is a popular approach to hold off on new software updates until systems have been in production use for months or years, and there is value to this concept – allowing others to vet, test and uncover issues while you sit back and remain stable on existing, well known software.  But there is a reason why so many businesses forge ahead: adopting software early delivers the latest features and advantages as soon as possible.

Unlike software coming from a small company with limited support and testing resources, Microsoft’s software is incredibly well tested, both internally and by the community, before it is available to end users.  Little software is more heavily vetted prior to release.  That doesn’t mean that release day rollouts are wise, but beginning to evaluate new products early has major advantages: the newest features become available as soon as possible to those who decide to use the new product, and those who decide to migrate away get the most time to find an alternative.  Early decision making is important to success.

The reality is that, while many businesses should take the time to evaluate Windows 8 against alternative solutions – a practice that should be done regularly, regardless of new features or changes to environments, to ensure that traditional choices remain the best current choices – nearly all businesses today will be migrating to Windows 8 and remaining in the Microsoft ecosystem for quite some time to come.

This means that many companies should be looking to make the jump to Windows 8 sooner rather than later.  Windows 8, while seemingly shockingly new and innovative, is based on the same Windows NT 6 family kernel that began with Windows Vista and Windows Server 2008, continued through the Windows 7 and Windows Server 2008 R2 era, and is shared with Windows Server 2012.  This kernel is mature and robust, and the vast majority of the code and features in Windows 8, user interface aside, are well tested and extremely stable.  Windows 8 uses fewer resources, on the same hardware, than Windows 7, which, in turn, was lighter and faster than Windows Vista.  The sooner you move to Windows 8, the sooner you get more performance out of your existing hardware and the longer you have to leverage that advantage.

Windows 8 brings some big changes that will, without a doubt, impact end users.  These changes can be, in some cases, quite disruptive, but with proper training and preparation users should return to regular productivity levels in a reasonable amount of time and often will be more productive once they are comfortable with the new environment and features.  Those who do not fall into one of these two categories are a smaller, niche user group and are prime candidates for moving to a completely different ecosystem where their needs can be more easily met.

If you are an organization destined to be running Windows 8, or its successors, “someday,” then most likely you should be running Windows 8 today to start leveraging its advantages as soon as possible so that you can use them for as long as possible.  If Windows truly is the platform that is best for you, embrace it and accept the “hit” of transitioning to Windows 8 now; swallow that bitter pill and be done with it.  For the next several years, while your competitors are whining about having to move to Windows 8 “someday,” you will be happily leveraging your older hardware, your more efficient workflows and your more modern systems day after day, reaping the benefits of an early migration to a stable platform.

It is common for IT departments to take a “wait and see” approach to new system migrations.  I am convinced that this is created by a culture of hoping that IT staff will leave their current positions before a migration occurs and land a new position at a shop that has already migrated.  Or perhaps they hope to avoid the migration completely by awaiting a later version of Windows.  This second argument does carry some weight, as many shops skip operating system revisions, but doing so often brings extra overhead in security issues, application compatibility effort and other problems.

Windows 8 is unique in that it is the third release of the Windows NT 6 kernel series, so it comes as a rare, very stable, late release member of its family (the NT 6 family is sometimes called the “Vista Family.”)  Windows 8’s NT designation is 6.2.  The only other Microsoft NT operating systems to reach x.2 status were Windows XP x64 Edition and Windows Server 2003 / 2003 R2, which shipped with the NT 5.2 kernel – part of the NT 5 “Windows 2000” family.  Late release kernels are important because they tend to deliver the utmost in reliability and represent an excellent point at which to invest in a very long term deployment strategy that can last for nearly a decade.

Whether or not you agree with Microsoft’s unified platform vision or the radical approach to user interface included in Windows 8, you need to decide if you are continuing down the path of the Microsoft platform and, if so, embrace it rather than fight it and begin evaluating whether a move to Windows 8 and, by extension, Windows Server 2012 is right for you.  Don’t avoid Windows 8; it isn’t going to go away.  For most shops, making the decision to move today will sow the seeds of long term benefits that you can reap for years and years to come.


What Windows 8 Means for the Datacenter

Talk around Microsoft’s upcoming desktop operating system, Windows 8, centers almost completely on the dramatic departure of its Metro user interface, borrowed from Windows Phone which, in turn, borrowed it from the ill-fated Microsoft Zune.  Apparently Microsoft believes that the third time is the charm when it comes to Metro.

To me, the compelling story of Windows 8 comes not from the fit and finish but from the under-the-hood rewiring that hints at a promising new future for the platform.  In the past, Microsoft has attempted shipping Windows Server on alternative architectures including, for those who remember, the Digital Alpha processor and, more recently, the Intel Itanium.  In those previous cases, the focus was on the highest end Microsoft platforms being run on hardware above and beyond what the Windows world normally sees.

Windows 8 promises to tackle the world of multiple architectures in a completely different way – starting with the lowest end operating system and focusing on a platform that is lighter and less powerful than the typical Intel or AMD offering: the low power ARM RISC architecture, with the newly named Windows RT (previously WOA, Windows on ARM).

The ARM architecture is making its headlines as Microsoft attempts to drive deep into handheld and low power devices.  Windows RT could signal a unification between the Windows desktop codebase and the mobile smartphone codebase down the road.  Windows RT could mean strong competition from Microsoft in the handheld tablet market where the iPad dominates so completely today.  Windows RT could be a real competitor to the Android platforms.

Certainly, as it stands today, Windows RT has a lot of potential to be really interesting, if not quite disruptive, upon release.  But I think that the interesting story lies beneath the surface, in what Windows RT can potentially mean for the datacenter.  What might Microsoft have in store for us in the future?

The datacenter today is moving in many directions.  Virtualization is one driving factor as are low power server options such as Hewlett-Packard’s Project Moonshot which is designed to bring ARM-based, low power consumption servers into high end, horizontally scaling datacenter applications.

Today, server operating systems able to run on ARM servers, like those coming soon from HP, are few and far between and come mostly from the BSD family of operating systems.  The Linux community, for example, is scrambling to assemble even a single, enterprise-supported ARM-based distribution, and it appears that Ubuntu will be the first out of the gate there.  But this paucity of server operating systems on ARM leaves an obvious market gap, one that Microsoft may well be thinking of filling.

Windows Server on ARM could be a big win for Microsoft in the datacenter: a lower cost offering that broadens their platform portfolio without the need for heavy kernel reworking, since they are already making that effort for the kernel on their handheld devices.  This could be a significant push for Windows into the increasingly popular green datacenter arena, where ARM processors are expected to play a central role.

Microsoft has long fought to gain a foothold in the datacenter and today is as comfortable there as anyone, but Windows Servers continue to play in a segregated world where email, authentication and some internal applications are housed on Windows platforms while the majority of heavy processing, web hosting, storage and other roles are almost universally given to UNIX family members.  Windows’ availability on the ARM platform could push it to the forefront of options for horizontally scaling server farms such as web servers, application servers and other tasks that will rise to the top of the ARM computing pool – possibly even green high performance compute grids.

ARM might mean exciting things for the future of the Windows Server platform, probably at least one, if not two releases out.  And, likewise, Windows might mean something exciting for ARM.

Choosing an Open Storage Operating System

It is becoming increasingly common to forgo traditional, proprietary storage devices, both NAS and SAN, and instead use off the shelf hardware with a storage operating system installed on it for what many call “do it yourself” storage servers.  This, of course, is a misnomer, since no one calls a normal file server “do it yourself” just because you installed Windows yourself.  Storage has a lot of myth and legend swirling around it, and people often panic at the thought of installing an operating system themselves and calling the result a NAS rather than a file server.  So, if it makes you feel better, use terms like file server or storage server rather than NAS and SAN – problem solved.  This is part of the “open storage” movement – moving storage systems from proprietary to standard.

Choosing the right operating system for a storage server is important and not always easy.  I work extensively in this space, and people often ask me what I recommend; the recommendations vary based on scenario and often seem confusing.  But the factors are actually relatively simple, once you know the limitations that create the choices and paths in the decision tree.

Before choosing an OS we must stop and consider what our needs are going to be.  Some areas that need to be considered are: capacity, performance, ease of administration, budget, connection technology and clustering.  There are two main categories of systems that we will consider as well: standard operating systems and storage appliance operating systems.  The standard operating systems are Windows, Linux, Solaris and FreeBSD.  The storage appliance operating systems are FreeNAS, OpenFiler and NexentaStor.  There are others in both categories, but these are the main players currently.

The first decision to be made is whether or not you or your organization is comfortable supporting a normal operating system in a storage server role.  If you are looking at NAS, then simply ask yourself if you could administer a file server.  Administering a block storage server (SAN) is a little more complex or, at least, unusual, so this might induce a small amount of concern, but it is really in line with other administration tasks.  If the answer is yes – if using normal operating system tools and interfaces is acceptable to you – then simply rule out the “appliance” category right away.  The appliance approach adds complexity and slows development and support cycles, so unless necessary it is undesirable.

Storage appliance operating systems exist only to provide a pre-packaged, “easy to use” view into running a storage server.  In concept this is nice, but there are real problems with this method.  The biggest problems come from the packaging process, which pulls you a step away from the enterprise OS vendors themselves, making your system more fragile, further behind in updates and features and less secure than its traditional OS counterparts.  It also leaves you at the mercy of a very small company for OEM-level support when something goes wrong, rather than a large enterprise vendor with a massive user base and community.  The appliancization process also strips features and options from the systems by necessity.  In the end, you lose.

Appliances are nice because you get a convenient web interface from which “anyone” can administer your storage.  At least in theory.  But in reality there are two concerns.  The first is that there is always a need to drop into the operating system itself and fix things every once in a while.  The custom web interface of the appliance makes this dramatically harder than normal, so the time when you most need the appliance nature of the system is exactly when you do not have it.  The second is that making something as critical as storage available for “anyone” to work on is a terrifying thought.  There are few pieces of your infrastructure where you want more experience, planning and care taken than in storage.  Making the system harder to use is not always a bad thing.

If you need an appliance system then you are primarily looking at FreeNAS and OpenFiler.  NexentaStor offers a compelling product, but it is not available in a free version and the cost can be onerous.  The freely downloadable version appears to be free for the first 18TB of raw storage, but the license states otherwise, making this rarely the right choice.  (The cost of NexentaStor is high enough that purchasing a fully supported Solaris system would be less costly and would provide full support from the original vendor rather than from Nexenta, which is essentially repackaging old versions of Solaris and ZFS.  More modern code and updates are available less expensively from the original source.)

FreeNAS, outside of clustering, is the storage platform of choice in an appliancized package.  It has the much touted ZFS filesystem, which gives it flexibility and ease of use lacking in OpenFiler and other Linux-based alternatives.  It also has a working iSCSI implementation, so you can use FreeNAS safely as either a NAS or a SAN.  Support for FreeNAS appears to be increasing, with new developments being made regularly and features being retained.  FreeNAS offers a large range of features and supported protocols.  It is believed that clustering will be coming to FreeNAS in the future as well, as this has recently been added to the underlying FreeBSD operating system.  If so, FreeNAS will completely eliminate the need for OpenFiler in the marketplace.  FreeNAS is completely free.

OpenFiler lacks a reliable iSCSI SAN implementation (unless you pay a fortune to have that part of the system replaced with a working component) and is far more out of date than its competitors, but it does offer full block-level, real-time replication, allowing it to operate in a clustered mode for reliability.  The issue here is that the handy web interface of the NAS appliance does not address this scenario; if you want to do this you will need to get your hands dirty on the command line, very dirty indeed.  This is expert level work, and anyone capable of even considering a project to make OpenFiler into a reliable cluster will be just as comfortable, and likely far more comfortable, building the entire cluster from scratch on their Linux distribution of choice.  OpenFiler is built on the rather unpopular, and now completely discontinued, rPath Linux using the Conary packaging system, both of which are niche players, to say the least, in the Linux world.  You’ll find little rPath support from other administrators, and many packages and features that you may wish to access are unavailable.  OpenFiler’s singular advantage of any significance is the availability of DRBD for clustering which, as stated above, is nonsensical.  Support for OpenFiler appears to be waning, with new features being non-existent and, in fact, key features like AFP support having been dropped rather than new features added.  OpenFiler is free, but key features, like reliable iSCSI, are not.  Recent reports from OpenFiler users are that even non-iSCSI storage has become unstable in the latest release and that losing data is a regular occurrence.  OpenFiler remains very popular in the mindshare of this industry segment but should be avoided completely.

If you do not need your storage operating system appliancized then you are left with more and better choices, but a far more complex decision tree.  Unlike the appliance OS market, which is filled with potholes (NexentaStor has surprise costs, OpenFiler appears to support iSCSI but causes data loss, features get removed in new versions), all four operating systems mentioned here are extremely robust and feature rich.  Three of them have OEM vendor support, which can be a major deciding factor, and all have great third party support options far broader than what is available for the appliance market.

The first decision is whether or not Windows only features, notably NTFS ACLs, are needed.  It is common for new NAS users to be surprised when the SMB protocol does not provide all of the granular filesystem control that they are used to in Windows.  This is because those controls are actually handled by the filesystem, not the network protocol, and Windows alone provides these via NTFS.  So if that granular Windows file control is needed, Windows is your only option.
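For those who choose a non-Windows platform but still want to get close, Samba can store NT-style ACLs in extended attributes, though this remains an approximation because the underlying filesystem still enforces POSIX permissions.  A minimal, hypothetical share definition sketch (the share name and path are invented for illustration):

```ini
[storage]
    path = /srv/storage
    read only = no
    ; store NT ACLs in extended attributes; enforcement is still POSIX underneath
    vfs objects = acl_xattr
    map acl inherit = yes
    store dos attributes = yes
```

This narrows, but does not close, the gap with native NTFS, which is why the need for full Windows ACL semantics still forces the Windows choice.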

The other three entrants, Linux, Solaris and FreeBSD, all share basic capabilities with the notable exception of clustering.  All have good software RAID, all have powerful and robust filesystems, all have powerful logical volume management and all provide a variety of NAS and SAN connection options.  Many versions of Linux and FreeBSD are available completely freely.  Solaris, while free for testing, is not available for free for production use.

The biggest differentiator between these three OS options is clustering.  Linux has had DRBD, a robust block-level replication technology, for a long time now.  FreeBSD has recently (as of 9.0) added HAST to serve the same purpose.  So, in theory, FreeBSD has the same clustering options as Linux, but HAST is much newer and much less well known.  Solaris lacks storage replication in the base OS and requires commercial add-ons to handle this at this time.
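To make DRBD-style clustering concrete, here is a minimal, hypothetical DRBD resource definition; the resource name, node names, device paths and addresses are all invented, and a real deployment needs matching configuration and initialization on both nodes:

```
# /etc/drbd.d/storage.res (sketch, DRBD 8.x style)
resource storage {
  protocol C;                  # synchronous replication: writes confirmed on both nodes
  device    /dev/drbd0;        # replicated block device presented to the system
  disk      /dev/sdb1;         # local backing disk on each node
  meta-disk internal;
  on node1 {
    address 192.168.1.10:7789;
  }
  on node2 {
    address 192.168.1.11:7789;
  }
}
```

Protocol C trades a little write latency for the guarantee that both nodes hold every acknowledged write, which is what makes failover safe.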

Solaris and FreeBSD share the powerful and battle tested ZFS filesystem.  ZFS is extremely powerful and flexible and has long been the key selling point of these platforms.  Linux’s filesystem support is more convoluted.  Nearly any Linux distribution (we care principally about RHEL/CentOS, Oracle Unbreakable Linux, Suse/OpenSuse and Ubuntu here) supports EXT4, which is powerful and fast but lacks some of the really nice ZFS features.  However, Linux is rapidly adopting BtrFS, which is very competitive with ZFS but is nascent and currently available only in the Suse and Oracle Linux distros.  We expect to see it from the others soon for production use, but at this time it is still experimental.
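The feature overlap shows in how similarly the two expose snapshots and compression.  An illustrative command sketch only – the pool, dataset and mount names are invented, and these require root and the respective filesystems to be installed:

```
# ZFS (Solaris/FreeBSD)
zfs set compression=on tank/data      # transparent compression on a dataset
zfs snapshot tank/data@pre-upgrade    # instant, space-efficient snapshot
zfs rollback tank/data@pre-upgrade    # revert the dataset in place

# BtrFS (Suse and Oracle Linux today)
mount -o compress=zlib /dev/sdb1 /data              # transparent compression
btrfs subvolume snapshot /data /data/.pre-upgrade   # writable snapshot
```

The day-to-day operations are close enough that, for many shops, the choice really does come down to maturity and platform comfort rather than capability.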

Outside of clustering, likely the choice of OS of these three will come down primarily to experience and comfort.  Solaris is generally known for providing the best throughput and FreeBSD the worst.  But all three are quite close.  Once BtrFS is widely available and stable on Linux, Linux will likely become the de facto choice as it has been in the past.

Without external influence, my recommendations for a storage platform are FreeBSD first and then Linux, with Solaris eliminated on the basis that rarely is anyone looking for commercial support, so it is ruled out automatically.  This is based almost entirely on the availability of copy-on-write filesystems and assumes no clustering, which is the common case.  If clustering is needed, then Linux comes first, followed by FreeBSD, with Solaris ruled out again.
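The decision tree described above can be condensed into a small script sketch; the function name and yes/no inputs are invented for illustration, and the mapping is only this article’s heuristic (NTFS ACLs force Windows, clustering favors Linux, otherwise FreeBSD):

```shell
#!/bin/sh
# Sketch of the storage-OS decision tree from the text.
# Inputs: "yes"/"no" answers; output: a suggested platform.
choose_storage_os() {
    needs_ntfs_acls=$1   # granular Windows file permissions required?
    needs_clustering=$2  # block-level replication / failover required?

    if [ "$needs_ntfs_acls" = "yes" ]; then
        echo "Windows"   # NTFS ACLs rule out everything else
    elif [ "$needs_clustering" = "yes" ]; then
        echo "Linux"     # DRBD is the mature replication option
    else
        echo "FreeBSD"   # ZFS, with no commercial-support requirement
    fi
}

choose_storage_os yes no   # -> Windows
choose_storage_os no yes   # -> Linux
choose_storage_os no no    # -> FreeBSD
```

Real decisions weigh many more factors, of course; this just captures the hard delimiters that prune the tree first.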

Linux and FreeBSD are rapidly approaching each other in functionality.  As BtrFS matures on Linux and HAST matures on FreeBSD they seem to be meeting in the middle with the choice being little more than a toss up.

There is no single, simple answer.  Choosing a storage OS is all about balancing myriad factors: performance, resources, features, support, stability and more.  There are a few factors that can be used to rule out many contenders, and knowing these hard delimiters is key.  Knowing exactly how you plan to use the system, and which factors are important to you, is essential to weeding through the available options.

Even once you pick a platform there are many decisions to make.  Some platforms include multiple file systems.  There is SAN and NAS.  There are multiple SAN and NAS protocols.  There is network bonding (or teaming, in the Windows world).  There is multipathing.  There are snapshots, volumes and RAID.  The list goes on and on.
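As one example from that list, a hypothetical Linux (RHEL-style) bond configuration sketch; the interface name, address and bonding options are invented, and other distributions configure this differently:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (sketch)
DEVICE=bond0
TYPE=Bond
BOOTPROTO=none
IPADDR=192.168.1.20
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=802.3ad miimon=100"   # LACP aggregation, link check every 100ms
```

Each of the other items – multipathing, snapshots, RAID layout – carries a similar set of choices, which is why planning matters as much after the OS decision as before it.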