The Home Line

In many years of working with the small and medium business (SMB) market, I have noticed that most SMB IT shops tend toward one of two extremes: massive overspending, attempting to operate like huge companies by adopting costly technologies that are pointless at the SMB scale, or the opposite extreme, spending nothing and running technology that is completely inadequate for their needs.  Of course the best answer is somewhere in between – finding the right technologies and the right investments for the business at hand.  Some companies manage to work in that space, but far too many go to one of the two extremes.

A tool that I have learned to use over the years is to classify the behavior of a business against the decision making that I would use in a residential setting – specifically, my own home.  To be sure, I run my home more like a business than the average IT professional does, but I think that it still makes a very important point.  As an IT professional, I understand the value of the technologies that I deploy, I understand where investing time and effort will pay off, and I understand the long term costs of different options.  So where I make judgment calls at home is very telling.  My home does not have the financial value of a functional business, nor does it have the security concerns, nor the need to scale (my family will never grow in user base size, no matter how financially successful it is), so when comparing my home to a business, my home should, in theory, set the absolute lowest possible bar with regard to the financial benefit of technology investment.  That is to say, the weighing of options for an actual, functional business should always lean toward equal or greater investment in performance, safety, reliability and ease of management than my home.  My home should be no more “enterprise” or “business class” than any real business.

One could argue, of course, that I make poor financial decisions in my home and over-invest there for myriad reasons and, of course, there is merit to that concern.  But realistically there are broad standards that IT professionals mostly agree upon as good guidelines and, while many do not follow these at home – whether through a need to cut costs, a lack of IT needs at home or, as is often the case, a lack of buy-in from critical stakeholders (e.g. a spouse) – most agree as to which ones make sense, when they make sense and why.  The general guidelines as to what technology at which price points set the absolute minimum bar are by and large accepted and constitute what I refer to as the “home line”: the line below which a business cannot argue that it is acting like a business but is, at best, acting like a consumer or hobbyist, or worse.  A true business should never fall below the home line; doing so would mean that it considers the value of its information technology investment to be lower than what I consider my investment at home to be.

This adds a further complication.  At home there is little cost to the implementation of technologies.  But in a business, all of the time spent working on technology, and supporting less than ideal decisions, is costly – either costly in direct dollars spent, often because IT support is being provided by a third party on a contractual basis, or costly because time and effort are being expended on basic technology support that could be used elsewhere: the cost of lost opportunity.  Neither of these takes into account things like the cost of downtime, data loss or data breach, which are generally the more significant costs that we have to consider.

The cost of the IT support involved is a significant factor.  For a business, there should be a powerful leaning toward technologies that are robust and reliable, with a lower total cost of ownership or a clear return on investment.  In a home there is more room for spending time tweaking products to get them to work, working with products that fail often or require lots of manual support, using products that lack powerful remote management options, or using products that lack centralized controls for user and system management.

It is also important to look at the IT expenditures of any business and ask if the IT support is warranted in light of those investments.  If a business is unwilling to invest in its IT infrastructure the equivalent of what I would invest in the same infrastructure for home use, why would that business be willing to maintain an IT staff, at great expense, to maintain that infrastructure?  This is a strange expenditure mismatch, but one that commonly arises.  A business with little need of full time IT support will often readily hire a full time IT employee but be unwilling to invest in the technology infrastructure that said employee is intended to support.  There seems to be a correlation between businesses that underspend on infrastructure and those that overspend on support – though a simple reason for that could be that staff in that situation are the most vocal.  Businesses with adequate staff and investment give their staff little reason to complain, and those with no staff have no one to do the complaining.

For businesses making these kinds of tradeoffs, with only the rarest of exceptions, it would make far better financial and business sense to not have full time IT support in house and instead move to occasional outside assistance or a managed services agreement at a fraction of the cost of a full time person and invest a portion of the difference into the actual infrastructure.  This should provide far more IT functionality for less money and at lower risk.

I find that the home line is an all-around handy tool – just a rough gauge for explaining to business people where their decisions fall in relation to other businesses or, in this case, non-businesses.  It is easy to say that someone is “not running their business like a business,” but this adds weight and clarity to that sentiment.  That a business is not investing like another business up the street may not matter at all.  But if they are not putting as much into their business as the person that they are asking for advice puts into their home, that has a tendency to get their attention.  Even if, at this point, the decisions to improve the business infrastructure become primarily driven by emotion, the outcome can be very positive.

Comparing one business to another can result in simple excuses like “they are not as thrifty” or “that is a larger business” or “that is a kind of business that needs more computers.”  It is rarely useful for business people or IT people to make that kind of comparison.  But comparing to a single user or a single family at home is a much more concrete comparison.  Owners and managers tend to take a certain pride in their businesses, and having it be widely seen that they value their own company lower than a single household is non-trivial.  Most owners or CEOs would be ashamed if their own technology needs did not exceed those of an individual IT professional, let alone those needs plus all of the needs of the entire business that they oversee.  Few people want to think of their entire company as carrying less business value than an individual.

All of this, of course, brings up the obvious question: what are some of the things that I use at home on my network?  I will provide some quick examples.

I do not use ISP supplied networking equipment, for many reasons.  I use a business class router and firewall unit that has neither integrated wireless nor a switch.  I have a separate switch to handle the physical cabling plant of the house.  I use a dedicated, managed wireless access point.  I have CAT5e or CAT6 professionally wired into the walls of the house so that, for more robust and reliable networking, wireless is only used when needed, not as a default (most rooms have many network drops for flexibility and to support multimedia systems).  I use a centrally managed anti-virus solution, I monitor my patch management and I never run under an administrator level account.  I have a business class NAS device with large capacity drives and RAID for storing media and backups in the house.  I have a backup service.  I use enterprise class cloud storage and applications.  My operating systems are all completely up to date.  I use large, moderate quality monitors and have a minimum of two per desktop.  I use desktops for stationary work and laptops for mobile work.  I have remote access solutions for every machine so that I can access anything from anywhere at any time.  I have all of my equipment on UPS.  I have even been known to rackmount the equipment in the house to keep things neater and easier to manage.  All of the cables in the attic are carefully strung on J-hooks to keep them neat.  I have VoIP telephony with extensions for different family members.  All of my computers are commercial grade, not consumer.

My home is more than just my residential network; it is an example of how easy and practical it is to do infrastructure well, even on a small scale.  It pays for itself in reliability, and often the cost of the components that I use is far less than that of the consumer equipment often used by small businesses, because I research more carefully what I purchase rather than buying whatever strikes my fancy in the moment at a consumer electronics store.  It is not uncommon for me to spend half as much for quality equipment as many small businesses spend for consumer grade equipment.

Look at the businesses that you support or even, in fact, your own business.  Are you keeping ahead of the “home line?”  Are you setting the bar for the quality of your business infrastructure high enough?

Originally published on the StorageCraft Blog.

Should IT Embrace Subscription Licensing

With big name, traditionally boxed products like Microsoft Office and Adobe’s Creative Suite turning to new subscription licensing models we, as IT, have to look into this model and determine if and when it is right for our businesses.  In some cases, like with MS Office, we have the choice of boxed products, volume license deals or subscription licenses.  This is very flexible and allows us to consider many alternatives.  With Adobe, however, non-subscription options have been dropped, and if we want to use their product line, subscription pricing is our only option.  Moving forward this will increasingly be the trend, and something that the entire industry must face and understand.  It cannot be avoided easily.

First we should understand why subscription models are good for the vendors.  Many people, especially in IT, assume that subscriptions are designed to extract higher fees from customers and, certainly, any given vendor may raise prices in conjunction with changing models, but fundamentally subscription pricing is purely a licensing approach and does not imply an increase in cost.  It may, potentially, even mean a decrease.

Software vendors like subscription pricing for three key reasons.

The first is license management.  With traditional software purchases it was trivially easy for customers to install multiple copies of software, perhaps accidentally, causing a loss of revenue when software was used but not licensed.  License management was traditionally complicated and expensive for all parties involved.  Moving to subscription models makes it very easy to clearly communicate licensing requirements and to enforce policies.

For customers purchasing software, this change is actually beneficial, as it lowers the overall cost of software by helping to eliminate illegitimate uses.  By lowering the piracy rate, the cost that needs to be passed on to legitimate businesses can be lowered.  Whether this turns into lower cost for customers or higher margins for vendors, it is a benefit to all of the legitimate parties involved.

The second is eliminating legacy versions from support.  In traditional software and support models, customers might use old versions of software for many years resulting in many different versions requiring support simultaneously.  Often this would mean that support teams would need extensive training for a long tail of legacy customers or separate support groups would be needed for different software versions.  This was extremely expensive as support is a key cost in software development.  Likewise, development teams would be forced to be split with most resources focusing on developing or fixing the current software version while some developers would be forced to spend time patching and maintaining legacy versions that were no longer being sold.  These costs were often enormous and meant that great energy was being spent to support customers who were not investing in new software and came at the expense of resources for improving the software and support for the best customers.  The move to subscription licensing generally eliminates support needs for legacy versions as all customers move to the latest versions all of the time.

Again, this is a move that greatly benefits both the vendor and good customers.  It is only sometimes a negative for customers who relied on being “expensive to maintain” customers, using old software for a long time rather than updating.  But commonly even those customers benefit from not running old software, even if this is not how they would operate if they had their druthers.  The benefits to the vendor and to “good” customers are very large; the penalty to customers that were formerly not profitable is generally very small.

The third reason, which is really a combination of the above, is that customers who previously depended on buying a single version of a product and continuing to use it for a very long time, likely many years past the end of support, are effectively eliminated.  These customers, lacking a means to buy in this traditional manner, are normally either lost as customers (which is not a financial loss, as they were not very profitable) or converted into higher profit customers, even if begrudgingly.  This makes vendors very happy – separating the wheat from the chaff, so to speak: cutting loose customers that were not making them money and creating more customers that are.

Now that we have seen why vendors like this model, and why we are likely to see more and more of it in the future as large, leading vendors both demonstrate the financial value of the change and condition customers to think in terms of subscription license models, we will look at why IT departments and businesses should consider embracing this model for their own reasons.

To the business itself, subscription licensing offers some significant value, especially to finance departments.  Through moving to subscription licensing we are generally able to move from capital expenses (capex) to operational expenses (opex), which is generally seen as favorable.  But subscription value is far larger than that.  Subscription pricing gives cost predictability.  A finance department can accurately predict its costs over time and is rarely surprised, whereas under the old approach software was largely forgotten until some need required an old package to be updated and suddenly a very large invoice would be forthcoming with potentially very little warning (often followed by large re-training expenses due to the possibly large gap in software versions).  With subscription pricing, costs normally fluctuate fluidly with employee count.  As new employees are hired, the finance department can predict exactly how much they will cost.  And when employees leave, subscriptions can be discontinued and costs reduced.  Only software that is truly used is purchased.  The need to overbuy to account for fluctuations or predicted growth no longer exists.  Subscription licensing also leverages the time value of money, allowing businesses to hold onto their funds for as long as possible and pay only for what they use as they use it.

For IT the benefits are even greater.  IT should benefit from having a better relationship with finance and human resources as the costs and needs of incoming or outgoing users are better understood.  This eliminates some of the friction between these departments which is always beneficial.

IT also benefits from the effective enforcement of best practices.  It is common for IT departments to struggle to convince businesses to invest in newer versions of software which often results in support issues and unnecessary complexity and less than happy users.  With subscription pricing, IT is constantly supplied with the latest software for users which, in nearly all cases, is an enormous benefit both to IT and to the users of the software.  This eliminates much of the friction that IT experiences with the business and with management by moving the need for updates to an external mandate and no longer something that IT or the users must request.

IT benefits from easier license management on its end as well.  It is generally far easier to determine license availability and need.  Audits are unnecessary because the licensing process is generally (though nothing technically requires this) handled via an authentication mechanism with the vendor, which means that unless specific effort is taken to violate licensing (cracking software or some other extreme measure), licensing accidents are unlikely and easy to correct.

IT may also benefit from easier ability to handle complex licensing situations such as providing a higher feature set level for one user and not for another.  Licenses can often be purchased at a minimum level and upgraded if more needs are discovered.  The ability to easily customize per user and over time means that IT can deliver more value with less effort.

Many of the objections to subscription licensing are not actually objections to subscription licensing itself.  Often it is a perception of higher cost.  This is, of course, difficult to prove, since any given company may choose to charge whatever it wants for different license options.  Microsoft offers both subscription and non-subscription license options for some of its key products, such as MS Office.  This gives us a chance to see how Microsoft weighs the cost differences and benefits, and to compare the options so that we can find the most cost effective one for our own business.  By keeping both models, Microsoft can be audited by its customers to keep the costs of each model in line.  However, by offering both, it also loses many of the benefits that pure subscription models bring, such as needing to support only a single version at a time.
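
Because both models can be priced out, the comparison reduces to simple arithmetic.  The sketch below uses entirely hypothetical placeholder prices (not any vendor's actual pricing) to show how a business might compare a boxed product, repurchased on a refresh cycle, against a monthly subscription over the same horizon.

```python
# Rough break-even sketch comparing a perpetual license against a
# subscription. All prices here are hypothetical placeholders, not
# actual vendor pricing; substitute real quotes for a real comparison.

def perpetual_cost(price, refresh_years, horizon_years):
    """Total cost if the boxed product is repurchased every refresh cycle."""
    purchases = -(-horizon_years // refresh_years)  # ceiling division
    return price * purchases

def subscription_cost(monthly_fee, horizon_years):
    """Total cost of paying the subscription every month over the horizon."""
    return monthly_fee * 12 * horizon_years

horizon = 6  # years
boxed = perpetual_cost(price=400, refresh_years=3, horizon_years=horizon)
subs = subscription_cost(monthly_fee=10, horizon_years=horizon)

print(f"boxed over {horizon} years:        {boxed}")  # 800
print(f"subscription over {horizon} years: {subs}")   # 720
```

With these made-up numbers the subscription comes out slightly cheaper, but shortening the refresh cycle or raising the monthly fee flips the result – which is exactly why the evaluation must be done against real quotes rather than assumed in either direction.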

Adobe, on the other hand, made the switch from traditional licensing to subscription licensing essentially all at once and appears to have decided to raise its prices at the same time.  This is very misleading: it was Adobe that raised the price, not the subscription model that created the price increase.  The benefits of subscription pricing are benefits of the model.  The pricing decisions of any given vendor are a separate matter and must be evaluated the way any pricing evaluation is done.

The other common complaint that I have heard many times is an inability to “own” software.  This is a natural reaction but one that IT and business units should not have.  In a business setting software is not owned by people and we should have no emotional ties to it.  Software is just another tool for completing our work and whatever gives us the best ability to do that, at the best price, is what we want.  From a purely business perspective, owning software is irrelevant.  The desire to own things is a human reaction that is not conducive to good business thinking.  It is also very valuable to point out that IT should never have this mental reaction to owning software – it is the business, not the IT department or the IT professionals, who own software in their business.  IT is simply selecting, deploying, configuring and managing the software on behalf of the business that it supports.

Overall I truly believe that subscription licensing models are good, in general, for nearly everyone involved.  They benefit vendors in such a way that it enables them to be more viable and profitable, while making it easier for IT departments to deliver better value to their users often while enforcing many best practices that businesses would otherwise be tempted to avoid.  The improved profitability may also encourage vendors to pursue niche software titles that would have been previously unaffordable to create and support.  Vendors, IT and end users are nearly universal winners while businesses face the only real grey area where pricing may or may not be beneficial to them in this model.

Originally posted on the StorageCraft Blog.

The Weakest Link: How Chained Dependencies Impact System Risk

When assessing system risk scenarios it is very easy to overlook “chained” dependencies.  We are trained to look at risk at a “node” level asking “how likely is this one thing to fail.”  But system risk is far more complicated than that.

In most systems there are some components that rely on other components. The most common place that we look at this is in the design of storage for servers, but it occurs in any system design.  Another good example is how web applications need both application hosts and database hosts in order to function.

It is easiest to explain chained dependencies with an example.  We will look at a standard virtualization design with SAN storage to understand where failure domain boundaries exist and where chained dependencies exist and what role redundancy plays in system level risk mitigation.

In a standard SAN (storage area network) design for virtualization you have virtualization hosts (which we will call the “servers” for simplicity), SAN switches (switches dedicated for the storage network) and the disk arrays themselves.  Each of these three “layers” is dependent on the others for the system, as a whole, to function.  If we had the simplest possible set with one server, one switch and one disk array we very clearly have three devices representing three distinct points of failure.  Any one of the three failing causes the entire system to fail.  No one piece is useful on its own.  This is a chained dependency and the chain is only as strong as its weakest link.

In our simplistic example, each device represents a failure domain.  We can mitigate risk by improving the reliability of each domain.  We can add a second server and implement a virtualization layer high availability or fault tolerance strategy to reduce the risk of server failure.  This improves the reliability of one failure domain but leaves two untouched and just as risky as they were before.  We can then address the switching layer by adding a redundant switch and configuring a multi-pathing strategy to handle the loss of a single switching path, reducing the risk at that layer.  Now two failure domains have been addressed.  Finally we have to address the storage failure domain, which is done, similarly, by adding redundancy through a second disk array that is mirrored to the first and able to fail over transparently in the event of a failure.

Now that we have beefed up our system, we still have three failure domains in a dependency chain.  What we have done is made each “link” in the chain, each failure domain, extra resilient on its own.  But the chain still exists.  This means that the system, as a whole, is far less reliable than any single failure domain within the chain is alone.  We have made something far better than where we started, but we still have many failure domains.  These risks add up.
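
The arithmetic behind this compounding is straightforward.  The sketch below uses assumed per-device availability figures (99% each, chosen purely for illustration) to show both effects: a chain multiplies availabilities together, making the whole weaker than any link, while redundancy within a domain multiplies failure probabilities together, making that domain stronger.

```python
# Sketch of how chained dependencies compound risk. The 99% per-device
# availability figure is an assumption for illustration, not a real number.

def series(*availabilities):
    """A dependency chain: every link must work, so availabilities multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def mirrored(a, n=2):
    """A failure domain with n redundant devices: it fails only if all fail."""
    return 1.0 - (1.0 - a) ** n

# Simple chain: one server, one SAN switch, one disk array, each 99% available.
simple_chain = series(0.99, 0.99, 0.99)

# Redundant chain: each of the three domains is mirrored (two 99% devices),
# but the three domains are still chained in series.
redundant_chain = series(mirrored(0.99), mirrored(0.99), mirrored(0.99))

print(f"simple chain:    {simple_chain:.6f}")     # ~0.970299
print(f"redundant chain: {redundant_chain:.6f}")  # ~0.999700
```

Note that this simple model assumes failures are independent and that failover always works perfectly; real redundancy mechanisms can misbehave in ways the model ignores, so these figures are optimistic upper bounds.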

What is difficult in determining overall risk is that we must assess the risk of each item, then determine the new risk after mitigation (through the addition of redundancy) and then find the cumulative risk of each of the failure domains together in a chain to determine the total risk of the entire system.  It is extremely difficult to determine the risk within each failure domain as the manner of risk mitigation plays a significant role.  For example a cluster of storage disk arrays that fails over too slowly may result in an overall system failure even when the storage cluster itself appears to have worked properly.  Even defining a clear failure can therefore be challenging.

It is often tempting to take a “from the top” assessment of risk, which is very dangerous but very common for people who are not regular risk assessment practitioners.  The tendency here is to assess risk by viewing only the “top most” failure domain – generally the servers in a case like this – and ignoring any risks that sit beneath that point, considering those to be “under the hood” rather than part of the risk assessment.  It is easy to ignore the more technical, less exposed and more poorly understood components like networking and storage and focus on the relatively easy to understand and heavily marketed reliability aspects of the top layer.  This “top view” means that the risks under the top level are obscured and generally ignored, leading to high risk without a good understanding of why.

Understanding the concept of chained dependencies explains why complex systems, even with complex risk mitigation strategies, often result in being far more fragile than simpler systems.  In our above example, we could do several things to “collapse” the chain resulting in a more reliable system as a whole.

The most obvious component which can be collapsed is the networking failure domain.  If we were to remove the switches entirely and connect the storage directly to the servers (not always possible, of course) we would effectively eliminate one entire failure domain and remove a link from our chain.  Now instead of three links, each of which has some potential to fail, we have only two.  Simpler is better, all other things being equal.

We could, in theory, also collapse the storage failure domain by going from external storage to storage local to the servers themselves, essentially taking us from two failure domains down to a single one – the one remaining domain, of course, carries more complexity than it did before the collapsing, but the overall system complexity is greatly reduced.  Again, this is with all other factors remaining equal.

Another approach to consider is making single nodes more reliable on their own.  It is trendy today to look at larger systems and approach risk mitigation that way, by adding redundant, low cost nodes to add reliability to failure domains.  But traditionally this was not the default path taken to reliability.  It was far more common in the past, as shown in the former prevalence of mainframes and similarly classed systems, to build a high degree of reliability into a single node.  Mainframe and high end storage systems, for example, still do this today.  This can be an extremely effective approach but fails to address many scenarios and is generally extremely costly, often magnified by a need to have systems partially or even completely maintained by the vendor.  This tends to work out only in special niche circumstances and is not practical on a more general scope.

So in any system of this nature we have three key risk mitigation strategies to consider: improve the reliability of a single node, improve the reliability of a single domain or reduce the number of failure domains (links) in the dependency chain.  Putting these together as is prudent can help us to achieve the risk mitigation level appropriate for our business scenario.

Where the true difficulty exists, and will remain, is in the comparison of different risk mitigation strategies.  The risk of a single node can generally be estimated with some level of confidence.  A redundancy strategy within a single domain is far harder to estimate – some redundancy strategies are highly effective, creating extremely reliable failure domains, while others can actually backfire and reduce the reliability of a domain!  The complexity that often comes with redundancy strategies is never without caveat and, while it will typically pay off, it rarely carries the degree of reliability benefit that is initially expected.  Estimating the risk of a dependency chain is therefore that much more difficult, as it requires a clear understanding of the risks associated with each of the failure domains individually as well as an understanding of the failure opportunities existing at the domain boundaries (like the storage failover delay failure noted earlier).

Let’s explore the issues around determining risk in two very common approaches to the same scenario building on what we have discussed above.

Two extreme examples of the same situation we have been discussing are a single server with internal storage used to host virtual machines versus a six device “chain”: two servers using a high availability solution at the server layer, two switches with redundancy at the switching layer and two disk arrays providing high availability at the storage layer.  If we change any large factor here we can generally provide a fairly clear estimate of relative risk – if any of the chain’s failure domains lacks reliable redundancy, for example, we can fairly clearly determine that the single server is the more reliable overall system, barring an extreme (and generally financially impractical) degree of reliability built into one node.  But with each failure domain maintaining redundancy we are forced to compare the relative risks of intra-domain reliability (the redundant chain) vs. inter-domain reliability (the collapsed chain, the single server).

With the two entirely different approaches there is no reasonable way to assess the comparative risks of the two means of risk mitigation.  It is generally accepted that the six (or more) node approach with extensive intra-domain risk mitigation is the more reliable of the two approaches and this is almost certainly, generally true.  But it is not always true and rarely does this approach outperform the single node strategy by a truly significant margin while commonly costing four to ten fold as much as the single server strategy.  That is potentially a very high cost for what is likely a small gain in reliability and a small potential risk of a loss in reliability.  Each additional piece of redundancy adds complexity that a human must implement, monitor and maintain and with complexity and human interaction comes more and more risk.  Avoiding human error can often be more important than avoiding mechanical failure.
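
One way to see how this comparison can tip is to penalize each redundant domain with a small chance that its failover mechanism itself misfires due to complexity or human error.  All figures in the sketch below are assumptions chosen for illustration, not measurements of any real system.

```python
# Sketch of why the six device redundant chain does not always beat the
# single server. All availability and failover figures are illustrative
# assumptions, not measurements.

def series(*availabilities):
    """A dependency chain: every domain must work, so availabilities multiply."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def mirrored(a, n=2):
    """A domain of n redundant devices fails only if every device fails."""
    return 1.0 - (1.0 - a) ** n

single_server = 0.999  # one well-built node with internal storage (assumed)

# Redundant chain: three mirrored domains of 99% devices, each discounted by
# a small assumed chance that its failover mechanism itself misfires.
failover_works = 0.999
domain = mirrored(0.99) * failover_works
chain = series(domain, domain, domain)

print(f"single server:   {single_server:.6f}")
print(f"redundant chain: {chain:.6f}")
```

With these assumed numbers the redundant chain actually comes out slightly below the single well-built server; small changes to the assumptions flip the result, which is precisely why the comparison is so hard to make in general.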

We must also consider the cost of recovery.  If failure is to occur it is generally trivial to recover from the failure of a simple system.  An extremely complex system, having failed, may take a great degree of effort to restore to a working condition.  Complex systems also require much broader and deeper degrees of experience and confidence to maintain.

There is no easy answer to determining the reliability of systems.  Modern information delivery systems are simply too large and too complex, with too many indeterminable factors, to be evaluated in all cases.  With a good understanding of chained dependencies, however, and an understanding of risk mitigation strategies, we can take practical steps to determine roughly relative risk levels, see how similar risk scenarios compare in cost, identify points of fragility, recognize failure domains and dependency chains, and appreciate how changes in system design will move us clearly toward or away from reliability.

One Big Flat Network

There is a natural movement of networks to become unnecessarily complicated.  But there is great value in keeping networks clean and simple.  Simple networks are easier to manage, more performant and more reliable while being generally less expensive.  Every network needs a different level of complexity and large networks will certainly need an extensive level of it, but small businesses can often keep networks extremely simple which is part of what makes smaller businesses more agile and less expensive, giving them an edge over their larger counterparts.  This is an edge that they must leverage because they lack the enterprise advantage of scale.

There are two ways to look at network complexity.  The first is the physical network – the actual setup of the switches and routers that make up the network.  The second is the logical network – how IP address ranges are segmented, where routing barriers exist, etc.  Both are important to consider when looking at the complexity of your network.

It should be the goal of any network to be as simple as possible while still meeting all of the goals and requirements of the network.  

The first aspect we will address is the physically flat network.  Reducing a physical network to be flat can have a truly astounding effect on the performance and reliability of that network.  In a very small network this could mean working from a single switch for all connections.  Typically this is only possible for the very smallest networks, as switches are rarely available with more than forty-eight or possibly fifty-two ports.  But for many small businesses this is completely possible.  It may require additional cabling in a building, in order to bring all connections back to a central location, but it can often be attained, at least on a site by site basis.  Many businesses today have multiple locations or staff working from home, which can make the network challenges much greater, although each location can strive for its own simplicity in those cases.

As a network grows the concept of the single switch can be grown as well using the concept of switch stacking.  Stacked switches share a single switching fabric or backplane.  When stacked they behave as a single switch but with more ports.  (Some switches do true backplane sharing and some mimic this with very high speed uplink ports with shared management via that port.)  A switch stack is managed as a single switch making network management no more difficult, complex or time consuming for a stack than for a single switch.  It is common for a switch stack to grow to at least three hundred ports if not more.  This allows for much larger physical site growth before needing to leave the single switch approach.

In some cases, large modular single-switch chassis will grow even larger than this, allowing for four hundred or more ports in a single switch in a “blade like” enterprise switching chassis.

By being creative and looking at simple, elegant solutions it is entirely possible to keep even a moderately large network contained to a single switching fabric allowing all network connections to share a single backplane.

The second area that we have to investigate is the logical complexity of the network.  Even in physically simple networks it is common to find small businesses investing a significant amount of time and energy into implementing unnecessary subnets or VLANs and all of the overhead that comes with those.

Subnetting is rarely necessary in a small or even a smaller medium-sized business.  Traditionally, going back to the 1990s, it was very common to keep subnets to a maximum of 254 usable hosts (a /24 subnet, 256 addresses) because of packet collisions, broadcasts and other practical issues.  This made a lot of sense in an era when hubs were used instead of switches, broadcasts were common and a network was lucky to have 10Mb/s on a shared bus.  Today’s broadcast-light, collision-free networks with dedicated 1Gb/s channels experience load in a completely different manner.  Where 256 devices on a subnet made for an extremely large network then, having more than 1,000 devices on a single subnet is a non-issue today.

These changes in how networks behave mean that small and medium businesses almost never need to subnet for reasons of scale and can comfortably use a single subnet for their entire business, reducing complexity and easing network management.  More than a single subnet may be necessary to support specific network segmentation, such as separating production and guest networks, but scale, the reason traditionally given for subnetting, becomes an issue solely for larger businesses.

It is tempting to implement VLANs in every small business environment as well.  Subnetting and VLANs are often related and often confused, but subnets frequently exist without VLANs, while VLANs do not exist without subnets.

In large environments VLANs are a foregone conclusion and it is simply assumed that they will exist.  This mentality often filters down to smaller organizations, which are then tempted to apply it to businesses that lack the scale that makes VLAN management worthwhile.  VLANs should be relatively uncommon in a small business network.

The most common place where I see VLANs used when they are not needed is in Voice over IP or VoIP networks.  It is a common assumption that VoIP has special needs that require VLAN support.  This is not true.  VoIP and the QoS that it sometimes needs are available without VLANs and often will work better without them.

VLANs really only become important either when management is needed at a scale larger than a single subnet can provision and physical segregation is not possible, or when specific network-layer security is needed, which is relatively rare in the SMB market.  VLANs are very useful and do have their place.  They are often used when a dedicated guest network is needed, but generally in a small business guest access is provided via a direct guest connection to the Internet rather than a quarantined network for guests.

The most common practical use of a VLAN in an SMB is likely to be a walled garden DMZ designed for quarantined BYOD remote access where BYOD devices connect much like guests but have the ability to access remote access resources like RDP, ICA or PCoIP protocols.  VLANs would also be popular for building traditional DMZs for externally facing public services such as web and email servers – except that these services are not commonly kept on the local network for hosting in today’s SMBs so this classic use of VLANs in the SMB is rapidly fading.
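Where such a quarantined BYOD VLAN is genuinely warranted, the configuration need not be elaborate.  The following is a minimal sketch in Cisco IOS-style syntax; the VLAN number, interface name, addresses and the RDP-only access list are all hypothetical placeholders for illustration, not a recommendation for any particular hardware or topology.

```
! Hypothetical walled-garden BYOD VLAN (all numbers and names are examples)
vlan 30
 name BYOD-GUEST
!
! Access port for a BYOD wireless AP or wall jack
interface GigabitEthernet0/10
 switchport mode access
 switchport access vlan 30
!
! Permit only RDP (TCP 3389) to a single remote access host,
! block the rest of the production subnet, allow Internet access
ip access-list extended BYOD-IN
 permit tcp any host 10.0.0.25 eq 3389
 deny   ip any 10.0.0.0 0.0.0.255
 permit ip any any
```

The key design point is that the BYOD segment can reach only the published remote access endpoint, never the production network directly, which is exactly the walled garden described above.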

Another use case where VLANs are often used inappropriately is for a Storage Area Network or SAN.  It is best practice that a SAN be a completely independent (air gapped), physically unique network unrelated to the regular switching infrastructure.  It is generally not advised that a SAN be created using VLANs or subnets but instead be on dedicated switches.

It is tempting to add complex switching setups, additional subnets and VLANs because we hear about these things from larger environments, they are fun and exciting, and they appear to add job security by making the network more difficult to maintain.  Complex networks require higher end skills and can seem like a great way to use that networking certificate.  But in the long run, this is a bad career and IT strategy.  Network complexity should be added in a lab for learning purposes, not in production networks.  Production networks should be run as simply, elegantly and cost effectively as possible.

With relatively little effort, a small business network can likely be designed to be both physically and logically very simple.  The goal, of course, is to come as close as possible to creating a single, flat network structure where all devices are physical and logical peers with no unnecessary bottlenecks or protocol escalations.  This improves performance and reliability, reduces costs and frees IT resources to focus on more important tasks.

Originally posted on the StorageCraft Blog.

Starting the IT Clock Ticking

Everyone is always happy to tell you how important experience is over certifications and degrees when working in IT. Few things are so readily agreed upon within the industry. What is shocking, however, is how often that advice does not get translated into a practical reality.

New IT hopefuls, when asking for guidance, will be told the value of experience but then sent everywhere except towards experience by the advice that they receive. This makes no sense. When applying for IT jobs, hiring managers and human resources departments are interested in knowing when you started in IT and how many years you have been in the field. That is a hard number and one that you can never change once it has been set. Your start date is a factor of your career with which you are stuck for the rest of your life. You can get a degree anytime. You can get certified anytime. But your entry date into the field is permanent; it is the most important thing for an IT hopeful to focus on.

Many things will qualify as the first “start date” in a career. What is important is getting into a real IT position, or a software development position, to affix that date as early as possible.  (Nearly everyone in the field accepts software engineering as experience directly relevant to IT even though it is technically not IT.)  This counts towards experience which can, in turn, count towards other things including eligibility for positions, pay increases or even vacation accrual and similar benefits. Often IT hopefuls do not think about the range of possibilities for establishing that entry date and overlook opportunities, or they downplay the value of the entry date and opt out of opportunities that would have greatly benefited them, choosing to focus instead on more “socially accepted” activities that ultimately play a far smaller role in their overall career.

The most obvious example of an IT entry date is obtaining an entry level position in the field.  Because this is so obvious, many people forget that there are other options and can easily become overly focused on finding their first “normal” job, typically on a helpdesk, and may lose sight of everything else.

Even worse, it is common for assumptions to be made about how a first job is typically acquired and then, because of the assumed steps to get from A to B, the focus often shifts to those steps and the real goal is missed completely.  For example, it is often assumed that a college degree and industry certifications are requirements for getting into an entry level position.  Certainly an education and certifications can make breaking into the industry much easier.  But these are not themselves the goal; they are tools to achieve the goal.  Getting work to start a career is the goal, but often those extra steps get in the way, and a loss of focus leads would-be IT pros to misstep and skip career opportunities because they have become focused on proximate achievements like certifications rather than looking at their life from a goal level.

I have heard many times IT students ask whether they should take a job offer in their chosen career or continue with a degree path instead.  Even if the job is very good, it seems that almost ubiquitously the choice will be made to turn down the critical professional position because the student has lost focus and is thinking of the proximate goal, the education, while forgetting the true goal, the career.  This reaction is far more common than one might expect and very damaging to students’ prospects.  Perhaps, because an opportunity came along before they had completed their studies, they feel that good entry level positions are common and easy to acquire; perhaps they simply forgot why they were going to school in the first place; or perhaps they are not concerned with their careers and wish to spend their time relaxing in college before taking that next step.  Many students probably fear being unable to complete their education if they take a position in IT first, but there are very good options that allow for both the critical needs of a career and the completion of an education.  Taking a career position does not need to have a negative impact on the ability to complete an education if the educational process is still deemed important.

There are several avenues that allow for starting the “career clock”, as I like to think of it.  The easiest for most people, especially the relatively young, is to find an internship.  Internships can be found from a very young age, middle school or early high school, and generally into the mid or even late twenties.  Internships can be amazingly valuable because they often allow the earliest entry into the field (especially unpaid internships), generally many years earlier than other options, and come with the fewest up front expectations.  Students pursuing internships from a young age can often get a career jump of two to ten years on their non-interning counterparts!  The ability to leap forward in your career can be dramatic.  Internships abound and few students take the time and effort to invest in them.  Those students honestly interested in an internship will likely have no problem securing one.

Internships can be much more valuable than regular jobs because they, by definition, should include some amount of mentorship and projects designed to educate.  An entry level job typically focuses on simple, highly repeatable tasks that teach relatively little while a real internship should focus on growing and developing skills and an understanding of the IT discipline.  Because of this, a good internship will generally build a resume and establish experience much faster than most other methods, often allowing a wider range of exposure to different areas of IT.

Another good path for getting into IT as early as possible is volunteer work.  This is a little like interning except that it requires more effort and determination on the part of the hopeful IT professional and lacks the expectation of mentoring and oversight.  A volunteer role is always unpaid, but because of this it often offers a lot of flexibility and opportunity.  There are many places that need or welcome IT volunteers, such as churches, private schools and other non-profits running on tight budgets.  With volunteer work you will often get greater decision-making opportunities and exposure to the need to think of IT within financial constraints which, while typically tighter at a non-profit, exist in every instance of IT.  This business exposure is even better for resume building.

Volunteering is generally more difficult to do at a young age, as a level of maturity and knowledge is often needed, though not in all cases.  Volunteering at a larger non-profit that already has paid IT staff or more senior IT volunteers might combine volunteering with a nearly intern-like situation, whereas a smaller non-profit, such as a church, might mean handling IT alone, which can be very educational but potentially daunting and even overwhelming to a younger or nascent IT professional in the making.  A volunteer in a small non-profit may be in a position to run an IT shop, from top to bottom, before even being employed in a first traditional position.

Of course no single approach need be taken alone.  Interning with a for profit firm and volunteering as well can be even better, making for an even stronger and more valuable IT entry point.  Sometimes intern or volunteer work may continue even after traditional, paying employment is found because one pays the bills while the other builds the resume.

Even less traditional options exist, such as starting a business of your own, which is generally extremely difficult and often not possible at a young age, or finding traditional work while very young.  Starting a business will often teach a large volume of business skills and a small amount of IT ones, and can be extremely valuable but at a potentially devastating cost.  Compared to other approaches this is very risky under normal circumstances.  It certainly can be done but would rarely be considered the best choice.

What matters most is finding a position that establishes a starting point into IT.  Once that stake is driven into the proverbial ground it is set and the focus can shift to skill acquisition, broader experience, education, certifications or whatever is needed to take the career to the next level.  All of those subsequent skills are soft, they can be enhanced as needed.  But that starting date can never be moved and is absolutely crucial.

It is often not well communicated to high school and college age IT hopefuls that these opportunities are readily available and just how important they are.  So often society or the established education machine encourage students and those in the collegiate ages to discount professional opportunities and focus on education to the detriment of their experience and long term careers.  IT and software development are not careers that are well supported by traditional career planning and are especially not well suited to people who wait to jump into them until they feel “ready” because there will always be those with ambition and drive doing so at a far younger age who will have built a career foundation long before most of their peers even consider their futures.  IT is a career path that rewards the bold.

There is no need to follow the straight and narrow traditional path in IT.  That path exists and many will follow it; but it is not the only path and those that stray from it will often find themselves at a great advantage.

No matter what path you choose to take in your pursuit of a career in IT, be sure to be extremely conscious of the need to not just acquire skills but to establish experience and start the clock ticking.

Originally published on the StorageCraft Blog.

IT Generalists and Specialists

IT Professionals generally fall into two broad categories based on their career focus: generalists and specialists. These two categories carry far more differences than may at first be apparent, and moving between them can be extremely difficult once a career path has been embarked upon; often the choice to pursue one path or the other is made very early in a career.

There are many aspects that separate these two types of IT professionals; one of the most significant and misunderstood is the general marketplace for the two skillsets. It is often assumed, I believe, that both types exist commonly throughout the IT market, but this is not true. Each commands its own areas.

In the small and medium business market, the generalist rules. There is little need for specialties as there are not enough technical needs in any one specific area to warrant a full time staff member dedicating themselves to them. Rather, a few generalists are almost always called upon to handle a vast array of technical concerns. This mentality also gives way to “tech support sprawl” where IT generalists are often called upon to venture outside of IT to manage legacy telephones, electrical concerns, HVAC systems and even sprinklers! The jack of all trades view of the IT generalist has a danger of being taken way too far.

It should be mentioned, though, that in the SMB space the concept of a generalist is often one that remains semi-specialized. SMB IT is nearly a specialization on its own. Rather than an SMB generalist touching nearly every technology area it is more common for them to focus across a more limited subset. Typically an SMB generalist will be focused primarily on Windows desktop and server administration along with application support, hardware management and some light security. SMB generalists may touch nearly any technology but the likelihood of doing so is generally rather low.

In the enterprise space, the opposite is true. Enterprise IT is almost always broken down by departments, each department handling very focused IT tasks. Typically these include networking, systems, storage, desktop, helpdesk, application specific support, security, datacenter support, database administration, etc. Each department focuses on a very specific area, possibly with even more specialization within a department. Storage might be broken up by block and file. Systems by Windows, mainframe and UNIX. Networking by switching and firewalls. In the enterprise there is a need for nearly all IT staff to be extremely deep in their knowledge of and exposure to the products that they support, while needing little understanding of products that they do not support, as they have access to abundant resources in other departments to guide them where there are cross interactions. This availability of other resources, and a departmental separation of duties, highlights the differences between generalists and specialists.

Generalists live in a world of seeing “IT” as their domain to understand and oversee, potentially segmented by “levels” of difficulty rather than technological focus, and typically with a lack of specialized internal resources to turn to for help. Specialists live in a world of departmental division by technology where there are typically many peers working at different experience levels within a single technology stack.

It is a rare SMB that would have anything but a generalist working there. It is not uncommon to have many generalists, even generalists who lean towards specific roles internally but who remain very general and lack a deep, singular focus. This fact can make SMB roles appear more specialized than they truly are to IT professionals who have only experienced the SMB space. It is not uncommon for SMB IT professionals to not even be aware of what specialized IT roles are like.

A good example of this is that job titles common and generally well defined in the enterprise space for specialists are often used accidentally or incorrectly for generalists, without realizing that the job roles are specific. Specialist titles are often used for generalist positions that are not truly differentiated.

Two exceptionally common examples are the network engineer and IT manager titles.  For a specialist, network engineer means a person whose full time, or nearly full time, job focus is the design and planning, and possibly implementation, of networks, including the switching, routing, security, firewalling, monitoring, load balancing and the like, of the network itself.  They have no role in the design or management of the systems that use the network, only the network itself.  Nor do they operate or maintain the network; that is for the network administrator, who, again, only touches switches, routers, firewalls, load balancers and so forth, not computers, printers, servers and other systems.  It is a very focused title with no role overlap.  In the SMB it is common to give this title to anyone who operates any device on a network, often with effectively zero design or network responsibilities at all.

Likewise, in the enterprise an IT manager is a management role in an IT department.  What an IT manager manages, like any manager, is people.  In the SMB this title may be used correctly, but it is far more common to find the term applied to the same job role to which network engineer is applied: someone who has no human reports and manages devices on a network like computers and printers.  Not a manager at all, but a generalist administrator, and very different from what the title implies or how it is expected to be used in the large business and enterprise space.

Where specialists sometimes enter the SMB realm is through consultants and service providers who provide temporary, focused technical assistance to smaller firms that cannot justify maintaining those skills internally. Typical areas where this is common are storage and virtualization, where consultants will often design and implement core infrastructure components and leave the day to day administration to the in-house generalists.

In the enterprise the situation is very different. Generalists do exist but, in most cases, the generalization is beaten out of them as their careers take them down the path of one specialization or another. Entry level enterprise workers will often come in without a clear expectation of a specialization but over time find themselves going into one quite naturally. Most, if not all, IT growth paths through enterprise IT require a deep specialization (which may mean focusing on management rather than technical.) Some large shops may provide for cross training or exposure to different disciplines but rarely is this extensively broad and generally does not last once a core specialization is chosen.

This is not to say that enterprises and other very large shops do not have generalists; they do. It is expected that at the highest echelons of enterprise IT the generalist roles will begin to reemerge as new disciplines not seen lower in the ranks. These roles are often labeled differently, such as architect, coordinator or, of course, CIO.

The reemergence of generalists at the higher levels of enterprise IT poses a significant challenge for an industry that does little to groom generalists. This forces the enterprise generalist to often “self-groom” – preparing themselves for a potential role through their own devices. In some cases, organic growth through the SMB channels can lead to an enterprise generalist but this is extremely challenging due to the lack of specialization depth available in the majority of the SMB sector and a lack of demonstrable experience in the larger business environment.

These odd differences, which almost exclusively fall along SMB vs. enterprise lines, create a natural barrier, beyond business category exposure, to IT professionals migrating back and forth between larger and smaller businesses. The type of business and work experience is vastly different and the technology in use is dramatically different. Enterprise IT pros are often lost moving to an SMB, and SMB pros find that what felt like deep, focused experience in the SMB is very shallow in the enterprise. The two worlds operate differently at every level; outside of IT the ability to move between them is far easier.

Enterprise IT carries the common titles that most people associate with IT career specialization: system administrator, network engineer, database administrator, application support, helpdesk, desktop support, datacenter technician, automation engineer, network operations center associate, project manager, etc. SMB titles are often confusing both inside and outside of the industry. It is very common for SMB roles to coopt specialization titles and apply them to roles that barely resemble their enterprise counterparts and do not match the expectation of the title at all, as I demonstrated earlier. This further complicates fluid movement between realms as both sides become increasingly confused trying to understand how people and roles from the other realm relate to each other. There are titles associated with generalists, such as the rather dated LAN Administrator, IT Generalist and architect titles, but their use in the real world is very rare.  The SMB struggles to define meaningful titles and has no means by which to apply or enforce them across the sector.  This lack of clear definition will continue to plague both the SMB and generalists, who have little ability to easily convey the nature of their job role or career path.

Both career paths offer rewarding and broad options, but the choice between them does play a rather significant role in deciding the flavor of a career.  Generalists, beyond gravitating towards smaller businesses, will also likely pick up an industry specialization over time as they move into higher salary ranges (manufacturing, medical, professional services support, legal, etc.)  Specialists will find that their focus is in their technology and their focus on market will be less.  Generalists will find it easier to find work in any given local market; specialists will find that they often need to move to major markets, and potentially only the core markets will provide great growth opportunities, but within those markets mobility and career flexibility will be very good.  Generalists have to work hard to keep up with a broad array of technologies and changes in the market.  Specialists will often have deep vendor resources available to them and will find that the bulk of their educational options come directly from the vendors in their focus area.

It is often personality that pushes young IT professionals into one area or the other.  Specialists are often those that love a particular aspect of IT and not others or want to avoid certain types of IT work as well as those that look at IT more as a predetermined career plan.  Generalists often come from the ranks of those that love IT as a whole and fear being stuck in just one area where there are so many aspects to explore.  Generalists are also far more likely to have “fallen into” IT rather than having entered the field having a strategic plan.

Understanding how each type approaches the market, and how the markets approach IT professionals, gives IT professionals the opportunity to assess what they like about their field, make good career choices that keep them happy and motivated, and plan in a way that maximizes the impact of their career decisions.  Too often, for example, small business generalists will attempt a specialization focus (very often in enterprise Cisco networking, to take a common example) which has almost no potential value in the marketplace where their skills and experience are focused.  Professionals doing this will often find their educational efforts wasted and be frustrated that the skills they have learned go unused and atrophy, while also being frustrated that gaining highly sought skills does not appear to contribute to new job opportunities or salary increases.

There is, of course, opportunity to move between general and special IT roles.  But the more experience a professional gains in one area or the other, the more difficult it becomes to make a transition, at least without suffering from a dramatic salary loss in order to do so.  Early in an IT career, there is relatively high flexibility to move between these areas at the point where the broadening of generalization is minimal or the deep technical skills of specialization are not yet obtained.  Entry level positions in both areas are effectively identical and there is little differentiation in career starting points.

Greater perspective on IT careers gives everyone in the field more ability and opportunity to pursue and achieve the IT career that will best satisfy their technical and personal work needs.