The Future of Computing Infrastructure in Two Easy Steps
Aleksandr Zhuk
October 21, 2018
In the early days of the automobile, operating a car required the driver to be an engineer intimately familiar with the structure and operating principles of the steam engine, and later the internal combustion engine. This was the result of an asymmetry in load distribution across the classic People (Driver), Processes, and Technology triangle. The technology was advanced for its time, yet so rudimentary that it could not assume control of any of the processes involved in keeping the car in optimal operating condition. It was up to the person operating the steam-powered vehicle to keep an eye out and listen carefully for any signs of trouble.
DeDion Steamer, 1888, “The World’s Oldest Production Car,” on Display at the Ellenville Public Library & Museum, Ellenville, NY
A relatively similar distribution of the process load pertaining to the driving experience persisted through the first 30 to 50 years of automobile evolution. With time, however, increasingly sophisticated automated systems took on responsibility for the process control functions of driving. When the driver of a modern car turns the steering wheel, dozens of sophisticated computerized functions make split-second decisions about road conditions, differential torque application, vehicle velocity, and so on, all to execute the seemingly simple trajectory adjustment the driver requested.

Operating a modern car has become a "driving by wire," computer-controlled experience. The final frontier in car operation is the fully autonomous vehicle, already a modern reality, which transfers all driving process control functions to the Technology side of the People, Process, and Technology triangle, transforming the human driving experience into that of riding in a car: passively enjoying the commute while the vehicle does all the driving. At this phase of automobile evolution, the rider is not expected to know anything about how the technology and the automated processes work. All the passengers need to know is where they are going and how to convey it to the car, using an app or a simple vehicle control interface that will likely support natural voice instructions. Human control functions in driving have evolved from worrying about countless tactical questions, such as which turn to take and when to slow down, to focusing solely on the strategic one: "Where do we want to go today?"

I see the same fascinating trend unfolding in the evolution of computing infrastructure. In the intricate interplay of cloud and on-premise components, the increasingly blurry line between the two hosting options is bound to disappear altogether through innovation cross-pollination and technology evolution. Even today, the most sophisticated on-premise infrastructure systems mirror the functionality and flexibility of a private cloud. I can foresee a time when robust cloud-in-a-box systems will be small enough to carry in one's pocket.

For all its sophistication, with ready-to-use technology choices, dynamic scale, high availability, and cost efficiencies, modern computing infrastructure still requires its consumers to possess system-architecture-level skills, at least during the planning and design phases. Whether on-premise or in the cloud, the relative complexity of properly assembling the building blocks and operating them as a well-tuned computing system is still very high. As a result, systems infrastructure professionals with cloud architecture and support skills are in high demand today, especially those who also possess hybrid cloud/on-premise management and migration experience. This is a transient phase, however; it may last another 5 to 15 years, but it will ultimately give way to a fundamentally new computing infrastructure architecture and management experience.

The logic behind this inevitable transformation is quite simple. As the Harvard marketing professor Theodore Levitt famously remarked, "People don't want to buy a quarter-inch drill. They want a quarter-inch hole." Sophisticated as their skills may be, all the customer-retained cloud and on-premise infrastructure architects and support engineers give the business "a [costly] drill" (that is, a computing infrastructure to enable business processes), but what the business really wants is "a hole" (that is, an online store or a customer service portal).

Business-driven VR-based system design.
Image courtesy of ITProPortal

The next step in the computing infrastructure evolution, which I call Automated Interactive Development or AINTDEV (pun intended), will consist of two closely linked yet distinct phases. The two phases will be linked by a new set of automated processes and technologies collectively enabling what I call Automated Cloud Architecture as a Service (ACLAAS). ACLAAS will transfer all infrastructure building process control functions to the Technology side of the People, Process, and Technology triangle. The technology needed to enable ACLAAS is already in place, including AWS CloudFormation and AWS Lambda, with Amazon Lex added to the mix to provide a human-friendly system interaction engine.
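As a rough illustration of how these three services might be wired together, consider a Lambda handler that receives a parsed Lex intent and translates its slot values into a CloudFormation stack request. This is a minimal sketch under assumed names: the intent, slot names, and template URL are illustrative, not a real ACLAAS contract.

```python
# Hypothetical sketch of the ACLAAS glue layer: an Amazon Lex intent
# arrives at an AWS Lambda handler, which translates the slot values
# into a CloudFormation stack request. Intent and slot names here are
# illustrative assumptions.

def lex_intent_to_stack_request(event: dict) -> dict:
    """Map a Lex provisioning intent to a CloudFormation create-stack
    request body (returned as a plain dict for clarity)."""
    slots = event["currentIntent"]["slots"]
    return {
        "StackName": slots["projectName"],
        # Assumed template location; a real system would select one.
        "TemplateURL": "https://example.com/templates/web-tier.yaml",
        "Parameters": [
            {"ParameterKey": "AvailabilityZoneCount",
             "ParameterValue": slots["azCount"]},
            {"ParameterKey": "InstanceType",
             "ParameterValue": slots.get("instanceType", "t3.micro")},
        ],
    }

# In a real deployment this dict would be passed to boto3's
# cloudformation.create_stack(**request); here we only build it.
event = {"currentIntent": {"slots": {"projectName": "demo-shop",
                                     "azCount": "3"}}}
request = lex_intent_to_stack_request(event)
print(request["StackName"])  # demo-shop
```

The point of the sketch is the division of labor: Lex handles the human-friendly front end, Lambda does the translation, and CloudFormation carries out the actual provisioning.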

In the first phase of ACLAAS technology development, the human builder will still need to know the key principles of "operating" a computing infrastructure (think of knowing how and why to use the gas and brake pedals, the steering wheel, and the turn signals of a car). In this phase of ACLAAS evolution, the architect-operator of the system will be expected to issue instructions like: "Distribute my load among three availability zones and place my databases in private subnets. Place the web server and the back-end database in different security groups, and install a load balancer between database instances one and two. Connect persistent storage to EC2 instances one, two, and three." This capability is almost available today; it is just still hidden behind cumbersome programmatic tooling and a relatively rudimentary, complex user interface. By the end of the first phase of ACLAAS evolution, a graphical, VR/3D-enabled, drag-and-drop interface enhanced with voice commands will likely replace the clunky tools used today.

In the second and final phase of ACLAAS technology development, the human operator's instructions will change from technology-specific tactical choices to much more strategic, business-driven prompts. The system user will be able to issue natural-voice "driving instructions" like: "I want to build an online commerce website to sell X, Y, and Z. Eventually, I want my online store to have three regional, language-localized portals in the US, France, and China. The system must be able to handle 25,000 visitors a month on day one and scale easily as sales grow. We cannot be down for more than five minutes a year, and we can only lose X minutes of customer transaction data. Our budget for this system is $X." The "Gen 2" ACLAAS system will then consider the instructions provided and either present one or more solution prototypes or point out the changes needed to make them feasible: "To stay within your initial budget, I recommend limiting the initial deployment to a single web commerce portal in the US. Then, let's integrate financial performance metrics as a trigger to scale the site up to the next phase. I propose that when US sales volume reaches $1 million a month for three consecutive months, the system will auto-deploy the European portal."
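A requirement like "no more than five minutes of downtime a year" translates directly into an engineering availability target that a Gen 2 planner would have to design against. A small sketch of that arithmetic, with purely illustrative tier thresholds:

```python
# Translate a business constraint (maximum downtime per year) into an
# availability target and an architecture tier, as a Gen 2 ACLAAS
# planner might. The tier thresholds below are illustrative assumptions.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def availability_target(max_downtime_minutes_per_year: float) -> float:
    """Return the required availability as a fraction (e.g. 0.99999)."""
    return 1 - max_downtime_minutes_per_year / MINUTES_PER_YEAR

def recommend_tier(availability: float) -> str:
    if availability >= 0.99999:          # "five nines"
        return "multi-region active-active"
    if availability >= 0.999:            # "three nines"
        return "multi-AZ with automated failover"
    return "single-AZ with backups"

target = availability_target(5)  # "no more than five minutes a year"
print(recommend_tier(target))    # multi-region active-active
```

Five minutes a year works out to roughly 99.999% availability, which is why such a casual-sounding business sentence quietly implies one of the most expensive architectures available; surfacing that trade-off is exactly the budget conversation the Gen 2 system is imagined to have.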

Given the powerful business drivers fueling its evolution, I am quite certain that this is the fascinating future of business computing infrastructure. What does it mean for IT system architects and the businesses of the future? More than ever before, the business side of IT systems architecture will be about solving real business problems. This ongoing shift will continue to demand a wide range of business skills from those who call themselves technology infrastructure architects today but will join the widening ranks of business technologists tomorrow. The increasing complexity of the underlying technologies will also continue to place a premium on the engineers who build the sophisticated, intelligent private and public cloud systems of the future. Infrastructure technology will continue to evolve as an independent industry, with a widening skills divide between the technology's consumers (businesses), who will be able to focus on their business objectives, and the ever more technical designers, manufacturers, and vendors of cloud and related technologies.
