I ended my previous post with a comment about the underlying trend in multicore being upwards, and inexorably so. In this piece I want to look at hardware and where we are headed.
I started that piece by looking at how we got here, and why. The market splits into two parts, driven by very different requirements. On the one hand there is the general, server and higher-performance systems direction (sometimes called "general purpose" computing), and on the other the all-pervasive, and numerically very much larger, embedded systems market. To some extent the story of the two is the story of the exchange of technologies, but multicore has meant closer convergence. However, embedded is, and will continue to be, driven by the need to deliver hardware with a long, reliable life, often in an unremittingly harsh environment. This will continue to shape the way it adopts multicore, which will differ in some respects from the general purpose market.
In neither sector, by and large, are the multicore engines already announced anywhere near engineering limits in terms of device count; they also sit below the core counts implied by a simple Moore's Law-style doubling. So there is plenty of headroom for development. The ratio of memory to processor real estate continues to have a major impact and needs to be addressed. The solution, if one comes, may well be radical.
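As a rough yardstick of what that doubling would imply (the figures here are illustrative, not drawn from any vendor roadmap): if core counts tracked a doubling every two years from a baseline of N0 cores at time t0, they would follow roughly

N(t) ≈ N0 × 2^((t − t0)/2)

Starting from, say, 8 cores in 2011, that curve passes about 32 in 2015, 128 in 2019 and 256 in 2021 - and most announced parts sit below it, which is the headroom referred to above.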
Expect the major hardware players still to be the major players at the end of the decade. Look out too for start-ups with novel technologies, principally in architectures and memory. With the major companies focused elsewhere, and assuming financial difficulties continue for a while, there will be an opening for start-ups. As the market changes, some companies will licence-in technologies, which means that companies able to offer the kind of flexibility that ARM, say, offers stand to benefit.
Starting with this year, then:
GPGPUs to continue to grab attention - and growth
GPGPUs (General-Purpose Graphics Processing Units) are just one example of crossover. They have really taken off in the last few years among those seeking a lot of performance in a relatively cheap package who are prepared to take the time to program them. NVIDIA have dominated and will continue to dominate in the near future. The demise of the GPGPU seems unlikely for a few years yet, and doubtless this year will see more software tailored towards easing their use. However, don't expect to see widespread adoption in the domestic marketplace or in the office for a while yet!
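To give a flavour of the programming effort involved (a generic illustration, not tied to any particular product above), here is a minimal CUDA sketch that offloads a simple SAXPY loop to the GPU. Note how much of it is explicit memory allocation, data movement and launch configuration rather than the computation itself.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each GPU thread handles one element: y[i] = a * x[i] + y[i].
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Device-side buffers: the explicit allocation and copies below are
    // a large part of the programming effort referred to in the text.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    saxpy<<<blocks, threads>>>(n, 2.0f, dx, dy);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check one element (expect 4.0).
    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The pay-off for this extra bookkeeping is the thousands of threads the device can keep in flight on embarrassingly parallel work; the cost is that the programmer, rather than the compiler, decides what data lives where and when it moves.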
On the back of this, GPU manufacturers will undoubtedly see added growth, but expect that growth to slow in future years as the market matures - and as GPGPU usage possibly alters.
AMD will respond to Intel's large multi-cores
The Intel/AMD struggle will continue, with samples of Knights Corner and its relatives arriving in late 2011/early 2012 and letting the mainstream look at the real issues in dealing with large-scale multicore systems. The result will be more complex than just running more copies of the same code - something I will touch on in later pieces. The big issue will be how AMD - and indeed others - will respond to Intel's move towards systems with several tens of cores. At present AMD appear to lack an obvious line of response, but they are technically no slouches, so expect a reply of some kind, perhaps later in the year.
Looking further forward to the next decade...
Ever-increasing device counts will mean that by the end of the decade we will actually reach terascale levels (i.e. 10^12 active devices per chip) in a readily-deployable form. This capability will in turn mean the application of compute technologies to new classes of problem, whether in general purpose or in embedded processing. This will be software-driven and is a topic I will return to in a subsequent post. In some sub-sectors we may see a slowing of the rate of increase in core count for embedded systems as robustness requirements drive demand.
Rising core counts and architectural changes
Unsurprisingly perhaps, I think that core and device counts will continue to increase. Of course, the two don't necessarily go hand in hand, and changing architectures may well alter the relationship. Cores will probably be a heterogeneous mixture, giving manufacturers the ability to readily customise their offerings for specific markets. During the last few years of the decade and into the early 2020s the design of cores will in all probability change dramatically, although IA-compatibility will probably remain, at least nominally, for a while yet. Look for ideas from new start-ups as well as from smaller players bringing forward new technologies.
Look for new technologies to allow greater density
With all this will come new technologies such as 3D stacking which, while it has been around since the 1980s, hasn't been exploited in mainstream production. Increasing device density will make new types of structure imperative. At the time of writing Xilinx have just announced that they are going to use 3D stacking in some FPGAs that will sample at the end of the year. Higher densities may well need this and other technologies, but don't expect to see mainstream products from the likes of Intel and AMD using 3D stacking or alternative approaches until after mid-decade.
Faster interconnects in mid-decade
With higher core counts will go faster interconnects. There are already a number of interconnect companies promising new classes of optically-based technology by the middle of the decade. Look for a substantial shift in technologies around mid-decade, say 2015-2017.
With substantially greater compute power will go a need for much larger off-chip storage, together with the ability to address a lot of it rapidly in order to feed the processors. Expect new takes on existing technologies rather than wholesale re-thinks.
New memory technologies
The rise in core numbers will fuel a need for faster memory access in order to feed those (very) fast engines. That means that hard disks and solid-state memories will have to work hard to keep up. Look for providers of both to pursue novel solutions, with early samples probably appearing in mid-decade as the processors' demand for data takes off.
So as a final prediction in this section, what might tomorrow's chip look like? If we look towards 2021, we might expect to see a large number of cores, perhaps several hundred of them, in a heterogeneous array: a primary resource of one or two hundred cores surrounded by others offering specific capabilities. The primary cores will each have a relatively small amount of localised, high access-speed memory. The number of processors active, and their availability, will be managed to provide redundancy and accommodate failure. Individual core architectures may be less complex and more RISC-like than at present, to lower device counts and reduce individual core power consumption. The whole will be linked by optically-based switched networks allowing arbitrary connection to all processors. Access to off-chip memory will be fast, in order to keep the on-chip demand for data satisfied. Overall power demand will be lowered further by the ability to power down cores which are not in use.
To summarise: the market is going to be working hard over the next few years to find novel solutions. For this reason, watch for small companies offering novel answers to new problems. Don't expect these to come from the traditional companies, nor necessarily from the obvious countries. A chip such as the one described above raises a lot of issues that need solutions, and the best of those may well be found outside the industry. Making a mark in this market will take a great deal of thinking outside the box.
This means a lot of opportunities for fledgling companies and could even lead to a renaissance in the industry. But don't expect a new fully-fledged Intel to arrive from out of the blue either. Those things take a lot of time!