A Silicon Valley-based startup has recently emerged from stealth mode to reveal what it claims is the smallest, most pixel-dense dynamic display ever built. Mojo Vision’s display is just 0.48 millimeters across, but it has about 300 times as many pixels per square inch as a typical smartphone display. The display uses microLED technology instead of OLEDs (as in several generations of Samsung devices and the iPhone X) or an LCD (as in every other iPhone). Made from gallium nitride, microLED displays can consume as little as 10 percent of the power of LCDs and are 5 to 10 times as bright as OLEDs. That combination makes them a good fit for head-up displays and other augmented reality applications.
AMD certainly got an edge over Intel at Computex in Taipei this week, announcing pricing and availability of its third-generation Ryzen processors, which are based on its 7nm Zen 2 core. Meanwhile, Intel announced shipments of its 10nm Ice Lake Core processors and said it is bringing artificial intelligence (AI) to the PC. And Nvidia revealed its EGX edge AI platform.

AMD claims the new Zen 2 core delivers an estimated instructions-per-clock (IPC) uplift of up to 15% over the preceding Zen architecture. Powering both AMD's Ryzen and EPYC processors, it also brings significant design improvements, including larger caches and a redesigned floating-point engine.
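As a rough illustration of what an IPC uplift means (the 15% figure is AMD's claim; the clock speed here is an arbitrary assumption for the arithmetic), single-core throughput scales with IPC times clock frequency, so a 15% IPC gain at an unchanged clock translates directly into 15% more instructions per second:

```python
# Single-core throughput is IPC x clock frequency.
def throughput(ipc, clock_hz):
    """Instructions retired per second for one core."""
    return ipc * clock_hz

baseline = throughput(ipc=1.00, clock_hz=4.0e9)  # normalized Zen-class core
uplifted = throughput(ipc=1.15, clock_hz=4.0e9)  # +15% IPC, same clock

print(f"Gain at equal clocks: {uplifted / baseline - 1:.0%}")  # prints "Gain at equal clocks: 15%"
```

In practice, shipping parts also change clocks, core counts, and cache sizes, so the real-world speedup differs per workload; this only shows how the IPC term itself contributes.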
MediaTek rolled out its first 5G SoC, built on a 7-nm FinFET process and featuring the first implementation of Arm’s Cortex-A77 CPU and Mali-G77 GPU, as well as an integrated Helio M70 5G modem with download speeds of up to 4.7 Gbits/s. The multi-mode 5G SoC also supports 2G, 3G, and 4G and includes a new AI processing unit to support advanced AI applications. The 5G SoC is the result of MediaTek’s aggressive push to get out of the gate early on 5G. Executives say the company has spent about $1.5 billion per year over the past five years on R&D.
Rising systemic complexity and more potential interactions in heterogeneous designs are making it much more difficult to ensure a chip, or even a block within a chip, will function properly without actually monitoring that behavior in real time. Continuous and sporadic monitoring have been creeping into designs for the past couple of decades. But it hasn’t always been clear how effective these approaches are, how much they cost in terms of resources, and how far beyond a block or a chip these techniques should extend. This is especially true for safety- and mission-critical systems, where a design needs to be fully functional for extended periods of time, and in systems of systems, where the context extends well beyond the individual chips being designed.
While artificial intelligence and machine learning computation is often performed at large scale in datacentres, the latest processing devices are enabling a trend towards embedding AI/ML capability into IoT devices at the edge of the network. NXP showed a module for exactly this purpose at the Microsoft Build developer conference earlier this month. The 30x40mm board combines a handful of sensors with a powerful microcontroller – the i.MX RT1060C, a high-performance Cortex-M device running at 600MHz – plus connectivity capability. The NXP module can run machine learning either completely locally, with training and inference both taking place on its MCU, or it can connect to Microsoft Azure and send all data into the cloud for training, inference, or both.
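The local-versus-cloud split described above can be sketched as a simple dispatch: prefer on-device inference when a local model is present, otherwise ship the raw sample to a cloud endpoint. This is an illustrative sketch only; `local_model` and `cloud_client` are hypothetical stand-ins, not NXP or Azure APIs.

```python
# Hypothetical edge/cloud inference dispatch. Any object exposing
# predict()/infer() works; these names are illustrative, not real SDK calls.
def classify(sample, local_model=None, cloud_client=None):
    """Prefer on-device inference; fall back to the cloud path."""
    if local_model is not None:
        return local_model.predict(sample)   # data never leaves the device
    if cloud_client is not None:
        return cloud_client.infer(sample)    # raw sample is sent off-device
    raise RuntimeError("no inference path configured")
```

The design point the NXP module highlights is that the same application code can keep latency-sensitive or privacy-sensitive workloads entirely on the MCU, while still offloading heavier training or inference jobs when connectivity to Azure is available.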