Coupling & Crosstalk: Look beyond the small screen to get the big picture!

Coupling & Crosstalk is my column in the MEPTEC Report. This column appears in the Spring 2019 edition on pages 8-9.

Electronic coupling is the transfer of energy from one circuit or medium to another. Sometimes it is intentional and sometimes not (crosstalk). I hope that this column, by mixing technology and general observations, is thought provoking and “couples” with your thinking. Most of the time I will stick to technology but occasional crosstalk diversions may deliver a message closer to home.

Look beyond the small screen to get the big picture!

The electronics industry is in a transition forced by rapid changes in computer hardware and concepts. However, there is even more change on the horizon, and this historical perspective can help you understand and plan for the future.

The earliest ‘big iron’ computer systems were built to solve specific scientific and computational problems. UNIVAC, in the 1950s, marked the beginning of general-purpose ‘commercial’ computing systems. Software applications were limited by the capability of the general-purpose mainframe and minicomputer hardware. As hardware became faster, enabling more powerful software, rapid adoption occurred in many industries. But for non-scientific use, hardware fully defined and limited the practicality of software and applications.

In the 1980s and 90s, computing shifted from mainframes to personal computers (PCs). The economies of scale made it feasible for individuals to have their own computers. And widespread corporate deployments to the desk of each employee quickly followed since the productivity gain far exceeded the cost. Computing moved out of the data center and was fully decentralized with this new model of personal computing. Users started to demand and define their own needs! As a result, software engineers responded to these needs within the limitations of the hardware.

Then, as today, PC manufacturers relied on Intel to supply the research and development (R&D) that drives improvements in system architecture. These manufacturers have simply become ‘box assemblers’ competing in the market with the lowest-cost brand-name or white-box system. In the end, Intel sets the product specifications and features of its microprocessors, which in turn define a system’s performance. However, applications are still limited by the available computational hardware and architecture.

With annual volumes approximately five times those of PCs, smartphones are currently the engine powering semiconductor industry growth. Within the robust competition in the smartphone marketplace there is a never-ending quest for improved functionality (including increased processing power, better-quality displays, and longer battery life) and product differentiation to keep or gain market share. Even with such hardware improvements, the main operating systems, Android and iOS, are still limited by the hardware functionality.

Today smartphones are where the consumer sees innovation taking place. Users are enthralled as new features such as 3D face identification and folding displays are added. Looking ahead, the baseband processor and associated radio frequency (RF) ‘front end’ will need to change significantly to support 5G, not to mention the changes required if / when millimeter wave (mmWave) mesh networking is implemented as part of 5G. And advanced packaging innovation continues, including 2.5D and 3D integration, wafer-level chip-scale packaging (WLCSP), and panel-level processing (PLP), to improve performance and lower costs.

From a system architecture perspective, however, smartphones and PCs have become static due to their general-purpose nature. The core system design of a smartphone is not much different from a forty-year-old PC design centered around a processor with a traditional von Neumann architecture. So, where is the real innovation in system architecture and semiconductors occurring? It’s in the cloud! And the cloud has flipped the paradigm: the application now defines the hardware instead of being limited by it.
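
For readers who want that comparison made concrete: the defining trait of a von Neumann machine is a single processor stepping through instructions and data held in one shared memory. The toy interpreter below is a minimal sketch of that fetch-decode-execute loop; the instruction set and memory layout are invented for illustration and do not represent any real processor.

```python
# Toy von Neumann machine: one shared memory holds both the program and its
# data, and a single processor repeatedly fetches, decodes, and executes.
# Purely illustrative -- the instruction set is invented for this sketch.

memory = [
    ("LOAD", 10),    # acc = memory[10]
    ("ADD", 11),     # acc += memory[11]
    ("STORE", 12),   # memory[12] = acc
    ("HALT", None),
    None, None, None, None, None, None,
    7,               # address 10: first operand
    35,              # address 11: second operand
    0,               # address 12: result
]

acc = 0              # accumulator register
pc = 0               # program counter

while True:
    op, addr = memory[pc]          # fetch
    pc += 1
    if op == "LOAD":               # decode + execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[12])    # -> 42
```

Forty years of progress has made that loop vastly faster, but the basic shape of the machine, in a phone or a PC, is still recognizably this one.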

Regardless of what you call them, the ‘Super 7’ (Intel’s term) or the ‘Hyperscale 8’, these companies operate data centers on a scale orders of magnitude larger than other companies. Alibaba, Amazon, Apple, Baidu, Facebook, Google, Microsoft, and Tencent all have hyperscale data centers. Operating on the scale of millions of servers has required significant engineering at all levels to build and run these data centers cost-effectively.

These hyperscale companies have also pared back their computing equipment to only the essentials, eschewing every non-essential feature and eliminating all cosmetic items. Never going to attach a display to a server? Eliminate the display driver circuitry. Plastic bezels or fancy sheet metal? Gone. In this vein, the hyperscale companies have developed their own supply chains, using electronic manufacturing service (EMS) providers to build their systems and bypass traditional server companies like Hewlett-Packard and Dell. The scale of their purchasing makes it economical to obtain servers and other equipment with just the minimum required features at the lowest possible cost. Facebook went one step further by setting up the Open Compute Project, later joined by Microsoft and others, to ‘open source’ hardware designs and further increase innovation and economies of scale.

Beyond operational and supply chain ‘improvements’, the hyperscale operators have also made significant investments in new types of computational architecture and hardware. They have developed “private hardware” to enable specific end applications. For example, the deployment of machine learning has required substantial additional computing power. Graphics processing units (GPUs) have been successfully tasked with some of this computing load. So successfully, in fact, that Nvidia has repositioned itself from a graphics card provider to a machine learning company that also makes graphics cards.
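
The reason GPUs absorbed that load so readily is that most machine learning work reduces to large, regular matrix arithmetic that can be spread across thousands of identical cores. Here is a rough sketch, using NumPy purely as a stand-in for the batched matrix math a GPU would execute in parallel; the two-layer structure and sizes are my own illustration, not anything from the column.

```python
# Most neural-network inference boils down to repeated matrix multiplies plus
# a simple nonlinearity -- exactly the kind of regular, data-parallel work a
# GPU spreads across thousands of cores. NumPy stands in for that math here;
# shapes and values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.standard_normal((256, 1024))      # 256 inputs, 1024 features each
w1 = rng.standard_normal((1024, 4096))        # layer 1 weights
w2 = rng.standard_normal((4096, 10))          # layer 2 weights

hidden = np.maximum(batch @ w1, 0.0)          # matrix multiply + ReLU
logits = hidden @ w2                          # another matrix multiply

print(logits.shape)                           # (256, 10)
```

On a CPU these multiplies are chewed through a few rows at a time; on a GPU the identical arithmetic is fanned out across the whole device, which is why this was the first hardware the hyperscale operators reached for.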

Microsoft, Facebook, and others have turned to field programmable gate array (FPGA) co-processors to further accelerate machine learning computing. And Google calculated in 2013 that it would need to double its data centers to handle its machine learning load if it didn’t change its computing equipment. So Google developed its own application-specific integrated circuit (ASIC) with a unique architecture suited to neural networks. Its initial Tensor Processing Unit (TPU) provided an “order of magnitude” more machine learning computing power per watt than a traditional server. And Google now provides customer access to its third-generation TPUs via its cloud services.
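
What makes the TPU’s architecture distinctive is usually described as its large matrix unit, a systolic array in which operands flow through a fixed grid of multiply-accumulate cells rather than being fetched and dispatched one instruction at a time. The sketch below is a deliberately simplified, software-only illustration of that idea; the 3x3 size and the scheduling are invented for clarity, not taken from Google’s design.

```python
# Sketch of the systolic-array idea behind a matrix unit: a fixed grid of
# multiply-accumulate (MAC) cells, each responsible for one output element,
# with operands streamed past the cells cycle by cycle. The size and
# scheduling below are invented purely for illustration.
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
B = np.array([[9, 8, 7],
              [6, 5, 4],
              [3, 2, 1]])

n = A.shape[0]
acc = np.zeros((n, n), dtype=int)     # one accumulator per MAC cell

# Each "cycle" k streams one column of A and one row of B through the grid;
# every cell (i, j) multiplies the pair passing it and adds to its accumulator.
for k in range(n):
    for i in range(n):
        for j in range(n):
            acc[i, j] += A[i, k] * B[k, j]

assert (acc == A @ B).all()
print(acc)
```

Because the data marches through the array in lockstep, the chip spends its transistors and watts on arithmetic rather than on instruction fetch and caching, which is where the per-watt advantage comes from.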

At the recent IEEE International Solid-State Circuits Conference (February 2019), Facebook’s Chief Artificial Intelligence (AI) Scientist, Yann LeCun, described how Facebook needs new machine learning (deep learning) hardware to continue making advances. He stated that different processor architectures are needed, along with changes in the way arithmetic is done in circuits, to increase efficiency. Not only is Facebook discussing basic circuitry, it has created its own in-house chip design team to do the research and development to build the hardware it desires or requires.
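
LeCun’s circuit-level details are beyond the scope of this column, but one direction widely associated with “changing how the arithmetic is done” is reduced-precision computation: storing and multiplying 8-bit integers instead of 32-bit floats, which shrinks both circuit area and energy per operation. Below is a toy quantization sketch using a deliberately simple symmetric, per-tensor scheme and illustrative sizes of my own choosing; it is not Facebook’s design.

```python
# Toy illustration of reduced-precision arithmetic: quantize float32 weights
# and activations to int8, do the multiply-accumulate in integers, then scale
# back. Real accelerators do this in hardware; the scheme here is deliberately
# simple and the numbers are illustrative.
import numpy as np

def quantize(x):
    scale = np.abs(x).max() / 127.0           # map the tensor onto [-127, 127]
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(1)
weights = rng.standard_normal((64, 64)).astype(np.float32)
activations = rng.standard_normal((64,)).astype(np.float32)

qw, sw = quantize(weights)
qa, sa = quantize(activations)

# Integer MAC (accumulate in int32), then one floating-point rescale at the end.
int_result = qw.astype(np.int32) @ qa.astype(np.int32)
approx = int_result * (sw * sa)

exact = weights @ activations
print(np.max(np.abs(approx - exact)))         # quantization error vs. float32
```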

Why does it make sense for a ‘social media’ giant and the other hyperscale cloud operators to go all the way down to the gate level on their hardware? Their computational needs are very specific, and they must obtain greater efficiency than general-purpose computing hardware can provide. Since they operate on such a large scale – both in terms of computing units purchased and power consumed – they can justify investments of tens to hundreds of millions of dollars to build private hardware, including custom leading-edge ASICs. In an effort to gain an advantage over their competitors, the hyperscale companies will continue to push non-traditional architectures and other proprietary innovations.

The new frontier of computing innovation now resides in the hyperscale data centers. The pendulum has swung back from the distributed computing environment to ‘big iron’, with a twist. “Software” and “social media” companies now need solution architects and hardware experts to develop the most efficient computing platforms for their applications. And hardware companies need consultants and connections to the hyperscale companies to understand those application needs and properly position their technology for consideration. Unlike consumer and commercial products, we will only see the details of these proprietary solutions when the cloud providers wish to let the sun shine through.

While most people were busy staring down at their smartphone screens, they didn’t glance up to see the concentration of computing power move to ‘the cloud’. Now you need more – the ability to see the direction the wind is blowing these clouds and what can be done to “make it rain” on your company!

As always, I look forward to hearing your comments directly. Please contact me to discuss your thoughts or if I can be of any assistance.
