The End of the GUI Era

We should set the stage by looking at some historical context around GUIs and their role within the world of systems administration.

In the “olden days” we did not have graphical user interfaces on any computers at all, let alone on our servers. Long after GUIs began to become popular on end user equipment, servers still did not have them. In the 1980s and 1990s the computational overhead necessary to produce a GUI was significant relative to the total computing capacity of a machine, and spending what little capacity there was on a GUI was impractical at best and often simply impossible. The world of systems administration grew up in this context, working from command lines because there was no other option available to us. Few administrators even desired GUIs for systems administration, perhaps because the idea had not yet occurred to anyone.

In the mid-1990s Microsoft, along with some others, began to introduce the idea of GUI-driven systems administration for the entry level server market. At first the approach was not that popular, as it did not match how experienced administrators were working. But slowly, as new Windows administrators, and to some degree Novell Netware administrators, began to “grow up” with access to GUI-based administration tools, an accepted place in the server market emerged for these systems. In the mid to late 1990s UNIX and other non-Windows servers completely dominated the market. Even VMS was still a major player, and on the small business and commodity server side Novell Netware was the dominant player mid-decade and still a very serious contender late in the decade. Netware offered a GUI experience, but a very light one that should probably be considered only “semi-GUI” compared to the rich GUI experience Windows NT offered by 1996, and to some degree earlier with the NT 3.x family, although Windows NT was only just finding its place in the world before NT 4’s release.

Even at the time, GUI-driven administration remained primarily a backwater. Microsoft and Windows still had no major place on the server side but were beginning to make inroads via the small business market, where their low cost and easy to use products made a lot of sense. It was the late 1990s panic and market expansion, brought on by the combination of the Y2K scare, the dotcom bubble and excellent product development and marketing by Microsoft, that drove significant growth and a shift towards a GUI-driven administration market.

The massive expansion of the IT market in the late 1990s meant that there was not enough time or resources to train the people entering IT. The learning curve for many systems, including Solaris and Netware, was very steep, and the industry needed a truly epic number of people to go from zero to “competent IT professional” faster than was possible with the existing platforms of the day. The market growth was explosive and there was so much money to be made working in IT that there were few resources left for training newcomers: anyone qualified to teach could earn far more working in the industry than working in education. As the market grew, the value of mature, experienced professionals became extremely high, as they were rarer and rarer in the ever expanding field as a whole.

The market responded to this need in many ways, but one of the biggest was to fundamentally change how IT was approached. Instead of pushing IT professionals to overcome the traditional learning curves and develop the skills needed to effectively manage the systems then on the market, the market changed the tools themselves to accommodate less experienced and less knowledgeable IT staff. Simpler, and often more expensive, tools with GUI interfaces began to flood the market, allowing those with little training and experience to be at least somewhat useful and productive almost immediately, even without ever having seen a product before.

This change coincided with the natural advancement of computer hardware performance. It was during this era that, for the first time, the power of many systems was such that while the GUI still made a rather significant impact on performance, the lower cost of support staff and the speed at which systems could be deployed and managed generally offset the loss of computing capacity taken by the GUI. The GUI rapidly became a standard addition to systems that just a few years before would never have seen one.

To improve the capabilities of these new IT professionals, and to rush them into the marketplace, the industry also shifted heavily towards certifications, more or less a new innovation at the time. Certifications allowed new IT pros, often with no hands-on experience of any kind, to establish some degree of competence, and to do so commonly without needing any significant interaction or investment from existing IT professionals as university programs would require. Both the GUI-based administration market and the certification industry boomed, and the face of IT changed significantly.

The result was certainly a flood of new, untrained or lightly trained IT professionals entering the market at a record pace. In the short term this change worked for the industry. The field went from dramatically understaffed to relatively well staffed years faster than it could have otherwise. But it did not take long before the penalties for this rapid uptake of new people began to appear.

One of the biggest impacts on the industry was an industry-wide “baby boom,” with all of the growing pains that entails. An entire generation of IT professionals grew up in the boot camps and rapid “certification training” programs of the late 1990s. The long term effect was that the rules of thumb and general approaches common in that era became codified to the point of near religious belief in a way that earlier, and later, approaches never were. Because education was done quickly and shallowly, many concepts had to be learned by rote, without an understanding of the fundamentals behind them. As the “Class of 1998” grew into the senior IT professionals of their companies, they became the mentors of new generations, and that old rote learning has very visibly trickled down through similar approaches in the years since, long after the knowledge became outdated or impractical. In many cases it was interpreted incorrectly and is wrong in predictable ways even for the era from which it sprang.

Part of the learning of that era was a general acceptance that GUIs were not just acceptable but practical and expected. The baby boom effect meant that there was little mentorship from the former era, and previously established practices and norms were often swept away. The industry did not so much reinvent itself as simply invent itself. Even the concept of Information Technology as a specific industry unto itself took its current form and took hold in the public consciousness during this changing of the guard. Instead of being a vestige of other departments or disciplines, IT came into its own; but it did so without the maturing and continuity of practices that would have come with more organic growth, leaving the industry in possibly a worse position than it might have enjoyed had it developed in a continuous fashion.

The lingering impact of the late 1990s IT boom will be felt for a very long time, as it will take many generations for the trends, beliefs and assumptions of that period to finally be swept away. Slowly, new concepts and approaches are taking hold, often only when old technologies disappear and new ones are introduced, breaking the stranglehold of tradition. One of these is the notion of the GUI as the dominant method by which systems administration is accomplished.

As we pointed out before, the GUI at its inception was a point of differentiation between old systems and the new world of the late 1990s. But since that time GUI administration tools have become ubiquitous. Every significant platform has, and has long had, graphical administration options, so the GUI no longer sets any platform apart in a significant way. This means there is no longer any vendor with a clear agenda driving them to push the concept of the GUI; the marketing value of the GUI is effectively gone. Likewise, not only did nearly all systems that previously lacked a strong GUI develop one (or more), but the GUI-based systems that lacked strong command line tools went back and developed those as well, along with new professional ecosystems around them. The tide most certainly turned.

Furthermore, over the past nearly two decades the rhetoric of the non-GUI world has begun to take hold. System administrators working from a position of mastery of the command line, on any platform, generally outperform their counterparts, leading to more career opportunities, more challenging roles and higher incomes. Companies focused on command line administration find themselves with more skilled workers and a higher administration density which, in turn, lowers overall cost.

This alone was enough to make the position of the GUI begin to falter. But there was always the old argument that GUIs, even in the late 1990s, used only a small amount of system resources and added only a small amount of additional attack surface. Even if they were not going to be used, why not have them installed “just in case”? As CPUs got faster, memory got larger, storage got cheaper and system design improved, the impact of the GUI became smaller and smaller, so the argument for having GUIs available got stronger. Especially strong was the proposal that GUIs allowed junior staff to do tasks as well, making them more useful. But it was far too common for senior staff to retain the GUI as a crutch in these circumstances.

With the advent of virtualization in the commodity server space, all of this began to change. The cost of a GUI suddenly became noticeable again. A system running twenty virtual machines would carry twenty times the CPU, memory and storage overhead of a single GUI instance. As virtual machine densities began to climb, so did the relative impact of the GUI.
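
To make the arithmetic concrete, here is a minimal sketch in Python using purely illustrative figures; the per-instance GUI footprint numbers are assumptions for the sake of demonstration, not measurements of any particular platform:

    # Illustrative only: assumed GUI footprint per OS instance.
    GUI_RAM_GB = 0.5    # assumed memory cost of one GUI stack
    GUI_DISK_GB = 8     # assumed storage cost of one GUI stack

    for vm_count in (1, 20, 100):
        ram = vm_count * GUI_RAM_GB
        disk = vm_count * GUI_DISK_GB
        print(f"{vm_count:>4} VMs: {ram:>7.1f} GB RAM, {disk:>6} GB disk spent on GUIs alone")

Whatever the real numbers are for a given platform, the multiplication is the point: overhead that was trivial on one physical server is paid again for every virtual machine on the host.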

Virtualization gave rise to cloud computing. Cloud computing increased virtual machine deployment densities and exposed other performance impacts of GUIs, mostly in terms of longer instance build times and more complex remote console access. Systems requiring a GUI began to noticeably lag behind their GUI-less counterparts in adoption and capabilities.

But the far bigger factor was an artifact of cloud computing’s standard billing methodologies. Because cloud computing typically exposes per-instance costs in a raw, fully visible way, IT departments had no means of fudging or overlooking the costs of GUI deployments, whose additional overhead could often double the cost of a single cloud instance. Accounting would see very clearly that bills for GUI systems cost far more than those for their GUI-less counterparts. Even non-technical teams could see the cost of GUIs adding up, even before considering the cost of management.

This cost only increases as we move towards container technologies, where the scale of individual instances becomes smaller and smaller, meaning that the relative overhead of the GUI becomes ever more significant.

But the real impact, possibly the biggest exposure of the issues around GUI-driven systems, is the industry’s move towards DevOps system automation models. Today only a relatively small percentage of companies are actively moving to a full cloud-enabled, elastically scalable DevOps model of system management, but the trend is there, and the model leaves GUI administrators and their systems completely behind. In DevOps models, direct access to machines is no longer a standard mode of management, and systems have gone even beyond being worked on solely from the command line to being built completely in code. This means that systems administrators working in the DevOps world must not only interact with their systems at a command line, but must do so programmatically.
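
As a concrete illustration of what “programmatically” means here, below is a minimal sketch of fleet-wide administration in Python. The hostnames are hypothetical placeholders, and shelling out to ssh is just one simple approach among many:

    #!/usr/bin/env python3
    # A minimal sketch: run one command across an entire fleet instead
    # of logging into each machine individually. Hosts are made up.
    import subprocess

    HOSTS = [f"app{n:02d}.example.com" for n in range(1, 21)]

    def run_everywhere(command):
        for host in HOSTS:
            # BatchMode=yes fails fast rather than prompting for a
            # password, which is essential for unattended automation.
            result = subprocess.run(
                ["ssh", "-o", "BatchMode=yes", host, command],
                capture_output=True, text=True,
            )
            status = "ok" if result.returncode == 0 else f"failed ({result.returncode})"
            print(f"{host}: {status}")

    run_everywhere("uptime")

Twenty servers or two thousand, the administrator’s effort is the same; that property is what GUI-driven administration cannot match.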

The market is rapidly moving towards fewer, more highly skilled systems administrators working with many, many more servers “per admin” than in any previous era. The idea that a single systems administrator can only manage a few dozen servers, a common belief in the GUI world, has long been challenged even in traditional “snowflake” command line systems administration, with numbers easily climbing into the few hundred range. But DevOps and similar automation models take those numbers into the thousands of servers per administrator. The overhead of GUIs is becoming more and more obvious.

As new technologies like cloud, containers and DevOps automation models become pervasive, so does the natural “sprawl” of workloads. This means that companies of all sizes are seeing an increase in the number of workloads that need to be managed. Companies that traditionally had just two or three servers may today have ten or twenty virtual instances! The number of companies that need only one or two virtual machines is dwindling.

None of this means that GUI administration is going to go away in the near, or even the distant, future. The need for “one off” systems administration will remain. But the ratio of administrators able to work in a GUI “one off” mode versus those who need to work through the command line, and specifically through scripted or even fully automated systems (a la Puppet, Chef, Ansible), is already tipping incredibly rapidly towards non-GUI systems administration and DevOps practices.
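
The core idea behind tools of that class can be shown in a few lines. What follows is a conceptual sketch only, not the real API of Puppet, Chef or Ansible: describe the desired state of the system, then converge the machine to it in a way that is safe to repeat:

    #!/usr/bin/env python3
    # Conceptual sketch of declarative, idempotent configuration
    # management; not the API of any real tool. The managed path is
    # a harmless demo file rather than a real system file.
    from pathlib import Path

    DESIRED_STATE = {
        Path("/tmp/motd.demo"): "Managed system. Manual changes will be lost.\n",
    }

    def converge():
        for path, content in DESIRED_STATE.items():
            if path.exists() and path.read_text() == content:
                print(f"{path}: already in desired state")  # idempotent: no action taken
            else:
                path.write_text(content)
                print(f"{path}: converged")

    converge()

Because a run that finds everything already correct changes nothing, the same definition can be applied to one server or a thousand, on a schedule, with no administrator at a console at all.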

What does all of this mean for us in the trenches of the real world? It means that even roles, such as small business Windows administration, that traditionally have had little or no need to work at the command line must reconsider their dependence on the local server GUI. Command line tools and processes are becoming increasingly powerful and well known, and they are increasingly how we are expected to work. In the UNIX world the command line has always remained, and a reliance on GUI tools would almost always be seen as a major handicap. This same impression is beginning to apply to the Windows world as well. Slowly, those who rely exclusively on GUI tools are being seen as second class citizens, increasingly relegated to more junior roles and smaller organizations.

The improvement in scripting and automation tools also means that economies of scale are improving, so the per-workload cost of administering a small number of servers is becoming very high. This creates heavy encouragement for smaller companies to look towards management consolidation through outside vendors who specialize in large scale systems management and can leverage scripting and automation techniques to bring their costs more in line with those of larger businesses. The ability to use outside vendors to establish scale, or an approximation of it, will be very important over time for smaller businesses to remain cost competitive in their IT needs while still getting the same style of computing advantages that larger businesses are beginning to experience today.

It should be noted that happening in tandem with this industry shift towards the command line and automation tools is a move to more modern, powerful and principally remote GUIs. This is a far less dramatic shift, but one that should not be overlooked. Tools like Microsoft’s RSAT and Server Administrator provide a GUI view that leverages command line and API interfaces under the hood. Likewise, Canonical’s Ubuntu world now has Landscape. These tools are less popular in the enterprise but are beginning to allow the larger SMB market to maintain a GUI dependency while managing a larger set of server instances. The advancement of these types of GUI tools may be the strongest force slowing the adoption of command line tools across the board.

Whether we view the move from the command line, to GUIs, and back to the command line as an interesting artifact of the history of Information Technology as an industry, or as a means to understanding how systems administration is evolving as a career path or business approach, it is good for us to appreciate the factors that caused it to occur and why the ebb and flow of the industry is now taking us back out to the sea of the command line once again. By understanding these forces we can more practically assess where the future will take us and when the tide may change again, and better decide how to approach our own careers and how to choose both technology and human talent for our organizations.

Understanding the Role of the Dell VRTX

Dell’s VRTX is one of those devices that is just sexy, as IT hardware goes. It strikes a chord and drives IT professionals nearly wild. It looks cool, it has an incredible amount of power, it can be rack mounted or placed under a desk, it is quiet – so quiet that it can be run right in the middle of an open office space. It’s just really cool, and nearly every IT professional wants one – even if they have no idea why.

The problem with the VRTX is that it is generally misunderstood, and the misunderstandings around the device and the architecture used within it have led to a nearly continuous stream of proposals to use the device where it is least suited. The device itself truly is awesome and has excellent use cases, but it is very important to understand what those are and what they are not, as this is a very specialized piece of hardware.

First, we need to determine what the VRTX “is”. The Dell VRTX is primarily a blade enclosure, more or less like any blade system. But unlike traditional blade enclosures, which typically hold six to ten blades each, the VRTX holds only four. So it is a “baby” blade enclosure. Because it is a true blade system, the Dell VRTX carries the normal caveats of any blade enclosure. However, due to its small size, the probability of its being used, and retired, effectively makes it quite a bit more reasonable to consider than traditional, larger blade enclosures. So an understanding of its blade nature is important in evaluating it for your organization’s needs.

Along with the included blade component, the VRTX also has a DAS (Direct Attached Storage) system attached via SAS to the blades. This storage array offers either twelve large form factor (3.5”) or twenty-five small form factor (2.5”) hard drives attached by way of either one or two PERC8 hardware RAID controllers. This included, large scale, shared external storage array inside of the VRTX blade enclosure is what makes the unit truly unique.

So all four blades share the single DAS unit for storage. The four blades constitute 2U of the VRTX enclosure and the DAS unit another 2U for a total enclosure size of 4U.

Of course, as with any blade system, there is no requirement that you fully populate the VRTX initially, or ever. The system can be used with any number of blades from one to four, as needed. But the value of a blade enclosure, especially a small one such as this, depends heavily on it being completely populated, or nearly so, to be cost viable.

Architecturally, what the VRTX represents is a highly compact, single chassis Inverted Pyramid of Doom (the “traditional” 3-2-1 architectural design) built following, more or less, the best approaches for that type of system. The biggest advantages here are that the use of a solid DAS is mandated and cannot be altered, and that all connections between the DAS and the compute nodes are hard wired internally for the highest level of potential reliability for a shared external storage system, with the least opportunity for human failure. By using DAS instead of SAN, our 3-2-1 has its “2” layer removed, resulting in a far better inverted pyramid structure. What we are left with is a 4-1 inverted pyramid design.

The overall profile of the VRTX is one of massive compute capability, far outstripping the computational needs of a normal SMB, all in a single chassis. The smallest blade option is a dual processor module and the biggest is a quad processor module, meaning that when fully populated we have a minimum of eight Intel Xeon processors across four nodes and a maximum of sixteen. This is truly a mammoth computational system in a small package. But it is critical to understand that all of this horsepower shares a single storage array, is not highly available and cannot be made so. This is a system designed for processing power, not as a reliable infrastructure component.

It should also be noted that Dell experienced reliability issues with the redundant PERC8 hardware RAID controller setup and had to pull it from the market for some time. As with nearly all storage systems in this category, which includes many DAS and SAN devices, redundant controllers are commonly the cause of storage outages rather than the prevention of them. Redundancy of RAID controllers is rarely a valuable addition and should never be looked upon as a panacea for storage reliability concerns.

Given the fact that the VRTX is compute heavy and reliability weak, what are its designated use cases? Where does it make the most sense to consider deploying this unit?

There are three extremely common deployment scenarios today where large compute and shared “fragile” storage often fit. Of course there may be many special cases and those should be evaluated individually based on the power, cost and reliability profiles of the VRTX relative to other options. But by and large the big three use cases where we would want to see the VRTX deployed would be:

Enterprise Remote Office and Branch Office (ROBO): This use case is based around the VRTX being a single device, easily deployable with nothing to do but “plug it in,” delivering a “reliable enough” but very powerful platform for remote offices. Not every remote or branch office needs the kind of horsepower that a VRTX can provide, and some require high availability, which it does not have. But large ROBOs are often ideally suited to this architectural profile due to the ease of remote management and the common ability to use remote access to a central office or datacenter as a means of providing failover and reliability in the event of a major disaster, whether to IT itself (such as a total failure of the VRTX) or to the ROBO itself (fire, flood, etc.)

A VRTX in this scenario can easily be the sole IT device, outside of networking equipment, powering an entire ROBO of hundreds or potentially even thousands of users. And the ability to do nearly all maintenance in a non-disruptive way, which is trivial to provide with a properly designed VRTX deployment, can be quite significant to a ROBO.

The reason this use case is solely for the “enterprise” ROBO rather than SMB ROBOs is simply that the total scale of the VRTX is larger than the typical needs of an SMB as a whole, let alone the needs of just one of its remote offices. The VRTX is just too “big” for the typical SMB.

Virtual Desktop Infrastructure (VDI): VDI generally requires a large amount of compute power, non-disruptive updates and shared storage, which is perfect for the VRTX. Of course this only makes sense in shops that need at least three, if not four, nodes of compute power to leverage the blade chassis nature of the VRTX. But for companies looking for eight to sixteen CPUs worth of VDI power, the VRTX can be a slam dunk. Possibly no use case is more appropriate for the VRTX than as a single, modular VDI system.

Big Data: Not many SMBs look to do big data processing today (Hadoop, Apache Spark, etc.), but a VRTX can be an ideal platform for doing huge processing in a small business that does not need to scale its data processing beyond this point. For larger enterprises needing a much larger scale of processing the VRTX would not be well suited; what makes it exceptionally valuable is matching its size to the organization’s need. Of course other kinds of computationally heavy processing, such as Monte Carlo simulations, would also work well on this platform.

Now that we know where the VRTX is well suited, where does it not fit well?

The VRTX is very poorly suited to general computing use, in both the SMB and the enterprise sectors. In the enterprise the VRTX represents a fully contained, but non-scaling, stack which would be unwieldy and expensive in a large infrastructure.

In the SMB the VRTX is dramatic overkill on the computational side while underkill, generally in reliability, on the storage side. Most SMBs, when scaling past a single computation node, are seeking both flexible scalability and higher than typical reliability. Often it is the desire for high availability alone that drives SMBs past a single computation node, considering the incredible capacity of a single node available today. Moving to an inverted pyramid architecture would therefore be counterproductive to the needs of the typical SMB. The VRTX is simply too big, too rigid and lacking the reliability profile desired by SMBs. The SMB is really the last market where I would expect the VRTX to be deployed, as the general computing needs that drive the SMB market are the farthest thing from the appropriate use cases for this device.

The VRTX is an amazing piece of equipment, well designed for several niche use cases, but it is not designed to replace standard servers, such as the Dell PowerEdge R730, in the typical scenarios for which those have been designed to be the ideal equipment. General use equipment exists as the industry standard and best seller for a reason; niche equipment also exists for a reason. Be sure to understand why the equipment you are considering makes sense for your environment: new and interesting is not enough to justify moving to special case gear.

Choosing a University Degree Program for IT

In my last article I looked at the overarching concerns and approaches to a university program and how it applies to us in IT. Now we will look at individual programs and how to approach the selection of a major and focus area within the university system.

With actual degree programs we face a world of complexity. Universities and colleges use a wide variety of names for their programs of study and often attempt to use one program to teach another, so a program name will often not match the actual field of study. This can be very bad, as you do not want to be in the position of needing to explain the discrepancy to potential or existing employers. An example of this was a well known northeastern school that lacked the ability to offer an IT program, so it relabeled its existing library science program as IT and passed it off as such for many years.

The first thing to consider is whether we want a program focused on our field or one outside of it. Given what we learned from the last article, that universities excel at liberal and traditional subjects and do poorly at technical ones, and that our goals are to be broadly educated rather than focused on specific skills, I generally prefer to see students or job candidates who have been through non-technical course loads rather than technical ones.

There are any number of good non-technical programs from which to choose. Great examples include communications, business, accounting and psychology. It is good, of course, if any program includes some technical concepts such as project management and systems analysis, but these can simply be addressed through electives. It is also best if any program includes studies in math, especially statistics and risk analysis, and general business classes, basic accounting and management. Students, we hope, will leave school with a firm foundation in understanding business context, people and communications, because these are the soft skills most critical to an IT career, and even more so to an SMB IT career where there is far less departmental isolation between technical positions and the operational side of the business.

For those who do not want to take the most liberal of paths as described above, universities often offer a large range of degrees within or near the IT discipline itself. This plethora of IT or IT-like options can often lead to confusion and makes the selection rather dangerous: a highly technical degree in the wrong area of study would be the worst possible option, teaching neither IT nor the broad skill set that IT practitioners desperately need. Even worse, going through the wrong field of study will often wildly mislead students as to what to expect when they enter the IT field. It may also look actively bad on a resume, because it can appear (and rightfully so in many cases) that the student did not take the time to understand their chosen field of study and know which degrees would be applicable to it, and either failed to realize this through years of university classes or did realize it and did not bother to switch to an appropriate program! This is what we most want to avoid: actively bad degree programs.

To make this as challenging as possible, IT degrees come with a variety of names, and they may be housed in different schools or colleges within a university. Some universities have IT degrees inside an IT school; others may have them within a more general science program, a math program or, often, within engineering. Some even have IT degrees under a business school. It is not unheard of for IT degrees to exist in multiple places within the same university, with different foci depending on which college is administering the program.

We must also address the big question: “is software engineering and programming a part of IT?” In universities the answer is generally yes, even though in the professional world the answer is a resounding “no”; the two are clearly different fields of study and different disciplines. Software engineering is dedicated to the design and building of products. IT is dedicated to the building and support of the infrastructure of businesses. There is some overlap, as any two fields might have, but they are very clearly different career fields that deal with extremely different day to day duties and tasks. It is quite common to find software engineering, developer and programmer courses and degree programs lumped into the same schools as IT, or even put under an IT umbrella. This is not necessarily bad, but it can be quite confusing. We must be clear, however, that software engineering is not IT, and any degree focused on programming should be avoided by someone with an interest in heading into the world of IT. Any respectable IT program will teach programming as a core foundation of the field, but the program will never be focused on it. If it is, it is a mislabeled program and should be avoided.

Proper IT programs should have names such as Information Technology, Computer Information Systems or Management Information Systems. IT and CIS programs are often interchangeable. MIS programs tend to be a subset of IT more focused on certain management-supporting aspects of IT.

The programs most insidious and dangerous to IT hopefuls are the ones most closely named but least closely associated with the IT field: computer engineering and computer science. These two should never, ever cross paths with those looking for careers in IT.

Computer engineering is older than IT and is a subset of electrical engineering. This is a traditional engineering field that focuses on the design of computers and computer components (like processors, chips, boards, peripherals) themselves and has effectively no crossover with IT or any IT-related discipline in any way. Computer engineering and IT should almost never even appear within the same school or college within a university.

If software engineering (which itself is not an IT discipline but is at least closely related) is the programming world’s analogue to traditional product development engineering, then computer science is the programming world’s analogue to physics or mathematics. Computer science is truly a “science and math” field, developing the theories and foundations that are then used by the software engineering discipline to build products often used and managed by the IT discipline. Computer Science, or CS, is probably the field most commonly entered by IT hopefuls in error, and if it is a true CS program it is completely inappropriate and a waste of time. This is the program to look out for the most. Avoid CS completely, and avoid any university attempting to pass IT programs off as CS; the two never overlap.

Do not take the selection of a university major lightly. My recommendation is to keep your selection as liberal as possible: use electives to introduce IT elements like basic programming and networking into your curriculum, fill your time with mind-broadening classes and learn about business, finance, accounting, communications, writing, speaking and statistics. Attempt to find internships or opportunities within the university to work with IT departments. Actively work to leverage your time at university to make yourself as prepared as possible to pick up the specific skills of IT outside of your university training.

How to Approach the University Experience

All discussions of university versus non-university aside, once a university (or college, as the Americans generally refer to it) is chosen, the next step is choosing a degree program that will fulfill our needs for our chosen profession. This, of course, presumes that our chosen profession is going to be IT. If you are not interested in a career in IT, this is probably not the article for you.

University programs can be problematic, especially in IT, because they are often mislabeled, students often do not know what area of study interests them before beginning their studies, and those pushing students towards university are often inexperienced in IT and do not understand the relationship between specific programs and the field itself. As a result, those directing students towards university studies with the intention of a career in IT will very often pressure them into programs ill-suited to IT careers at all.

There are two things we need to consider when looking to choose a degree program: what universities themselves are good at providing, and what will be useful to us in our IT careers.

First, where do universities shine? The university system, and its very core goals and values, are often completely unknown to the general public, which makes the broad use of universities a bit odd and problematic on its own. The university system was never meant to train students for specific careers but instead to introduce them to many concepts and foundational knowledge (not foundational industry knowledge, you must note) and to force them to think broadly and critically. In this respect, good universities usually shine.

It should be noted that some universities, including a very famous and well respected US university on the east coast, have openly stated that their mandate was not to educate or serve students in any way and that students attended their schools solely to finance the professors who were the actual product. Beware that your university choices see education as a goal, not a necessary evil.

Treating a university as a trade school is a fundamental mistake made by many, probably most, students. Course choices are not intended to be focused on specific skills that will be used “on the job” but on skills that will make one a more generally useful member of society. For example, the intended use of a university is not to teach someone the specific ins and outs of managing Active Directory design on Windows Server 2016; that would be the job of a trade school. Instead, university programs are intended to be more broadly based, teaching things such as data structures and authentication concepts, or, even more broadly, areas like writing and communications.

A student leaving university is not intended to be ready to hit the ground running in a real world job; that is not a goal of the system. Instead, the idea is that the student be well versed in the skills necessary to help them learn the specifics of a job or career and be better suited for it overall. It is not about speeding someone into a career but about preparing them for a lifetime in the field, at a heavy cost to the short term. The hope is that either the student has no concerns with finances (the traditional amateur system) or will make up for the cost of university (in both hard finances and career setbacks) over the span of their career. Understanding this is key to understanding how to approach university education to gain the value that we seek.

Second, what education is useful to us in our IT careers? At an early stage in our careers it is generally impossible to predict which skills we will need to leverage throughout our career lifespans. Not only do we not know which industry niches we will want to pursue, but we also have little ability to predict which skills will be needed, or even exist, in the future. Furthermore, nearly all people working in IT, if not every field, have little ability to fully pick and choose the area of technology in which they will end up working; instead they will be required to learn the skills of the jobs that become available to them, moving through their careers more organically than in a specifically predefined way.

Because of this, as well as because of the university values mentioned above, focusing on specific technical skills would be almost wholly a waste during the university time frame. Of drastically more value to us are soft and broad skills: developing a great world view, understanding business and accounting practices and concerns, learning psychology and sociology, studying good management practices and communications and, probably above all, becoming well versed in both written and oral business communications. Companies hiring IT professionals tend to complain about the lack of these skills, not a lack of technical competence, especially in smaller businesses where nearly all IT practitioners have a large need to communicate effectively with end users and often even management. Having a broad understanding of other job roles and the overall workings of businesses has great value for IT practitioners as well. IT only exists in a business context; the firmer the grasp of that context, the more value someone in IT has the potential to provide.

For the most part, what we want from our university experience actually lines up with what universities are best prepared to provide. What is least useful to us, throughout our lives, would be highly specific technical skills that are overly focused too early in our careers (or even before they have begun) and skills that would rapidly become outdated, often even before we leave university.

So where does this leave us? First we should look at the broadest degree options. Whether we are looking at Associates (two year) degrees or Bachelors (four year) degrees, we generally have a choice of an “of Arts” or an “of Science” option and, in a few rare cases, an “of Professional Studies” option. Each of these is simply a point along a sliding scale, with an Arts degree being the most liberal, focusing the least on the selected area of study. A Science degree is more focused and less liberal than an Arts degree. And the rare Professional Studies option is even more focused than a Science degree, with very little liberal study: basically the polar opposite of an Arts degree.

Of these degree options, I almost universally recommend the Arts approach. A heavy focus on specific skills is generally a poor approach to university for any degree field, but in IT this is more dramatic than in almost any other. Heavily specific classes and coursework are generally not useful, as the education becomes overly focused on a single area. A Science approach is a reasonable option, but I would lean away from it. The Professional Studies approach is a clear attempt to mimic a trade school program and should be avoided, both because it is a very poor use of university resources and because it is so rare that it would require regular explanation whenever a new person encountered it.

Staying highly liberal with our studies provides the best overall benefit from the university experience. Not only does it let us best leverage what the university offers but it also gives us the best foundation for our careers. There is also a hidden benefit, and that is career risk mitigation.

Career risk mitigation here refers to our university training not being overly specific, so that should we decide later that IT is not the field we want to pursue, or discover after some time that it is not the career in which we want to remain, our education supports that flexibility in an effective way. Perhaps our IT careers will lead us into management or entrepreneurship. Or maybe our IT experience will be in a field that we end up enjoying more than IT. Or we might live in a place where our IT opportunities are few and other opportunities exist. There are myriad reasons why having a broad, flexible education isn’t just best for our IT careers but also best for our non-IT careers.

Thinking about how university works, and understanding its core goals and how they apply to us, is the first step in being prepared to leverage the university experience for optimum value.