NeuroBlade tackles memory and bandwidth bottlenecks with XRAM
Businesses frustrated with the performance of memory- or network-bound analytics workloads may be interested in NeuroBlade, a hardware startup developing a new processing-in-memory (PIM) architecture called XRAM that merges RISC processor cores with the DRAM itself.
According to co-founder and CEO Elad Sity, NeuroBlade has redesigned the memory architecture itself to allow the insertion of small, programmable processing cores tightly coupled to the DRAM.
“The cores built into NeuroBlade’s XRAM memory are custom RISC processors designed by NeuroBlade itself,” Sity told Datanami by email. “NeuroBlade’s [software] stack and SDK [software development kit] hide the complexity and programming model of these cores from the end user, and also orchestrate the many cores for maximum parallelism and efficiency.”
NeuroBlade has also created a server appliance, or what it calls an Intensive Memory Processing Unit (IMPU), called Xiphos, which incorporates these XRAMs. The Xiphos can be connected to a standard host within the data center, Sity says, and ships with all the software needed to accelerate data analytics workloads.
With XRAM and Xiphos, NeuroBlade targets data-intensive workloads that are limited by the amount of memory or I/O available, as opposed to compute-limited workloads, for which CPUs and GPUs are well suited, Sity explains. The company says its approach has the potential to speed up scans by as much as 100x.
“Data analytics, or more specifically SQL workloads, is NeuroBlade’s primary target, although other data-intensive workloads will follow soon,” Sity says. “In a typical system, the CPU and memory are separate from each other, so when data needs to be processed, it has to be moved from memory to the CPU. In the data-intensive applications we focus on, the processing of the data itself is quite fast, and therefore the bottleneck for fast overall processing is not the power of the processor but rather the speed at which data is transferred between memory and processor.”
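The bottleneck Sity describes can be seen in even the simplest analytics kernel. The sketch below (purely illustrative, not NeuroBlade's API) expresses a SQL-style filter-and-aggregate as a scan: every element must cross the memory bus just to be compared and added, so throughput is capped by memory bandwidth rather than by the trivial amount of compute per byte.

```python
# Illustrative sketch of a memory-bound SQL scan, i.e. the kind of
# workload PIM targets. Names and sizes here are hypothetical.
from array import array

def filtered_sum(column, threshold):
    """SELECT SUM(x) FROM t WHERE x > threshold, expressed as a scan."""
    total = 0
    for x in column:       # each element is fetched from DRAM...
        if x > threshold:  # ...for only one compare and one add
            total += x
    return total

column = array("q", range(1_000_000))  # ~8 MB of 64-bit integers
result = filtered_sum(column, 999_000)
# Roughly 8 MB crosses the memory bus per scan while the arithmetic
# per byte is tiny, so the bus, not the ALU, sets the speed. A PIM
# design like XRAM instead runs the scan next to the DRAM arrays,
# so only the small aggregate result travels to the host.
```

On a conventional system, making this scan faster means moving data faster; the PIM argument is to move the (small) compute to the data instead.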
NeuroBlade isn’t the first company to adopt the PIM approach, but it claims to be the first to actually put a PIM solution into production. The company, which was founded in 2018 and has more than 100 employees, says it is already shipping its data accelerator to customers and partners around the world, and that its hardware is seeing SQL action at companies across the healthcare, pharmaceutical, finance, advertising, and cybersecurity sectors.
Last week, the company announced an $83 million Series B round, bringing its total outside investment to $110 million. The round was led by Corner Ventures with participation from Intel Capital, and was supported by existing investors StageOne Ventures, Grove Ventures and Marius Nacht. MediaTek, Pegatron, PSMC, UMC and Marubeni also contributed to the round.
Lance Weaver, Intel’s vice president and general manager of data center and cloud strategy, said the company is proud to invest in NeuroBlade and its technology. “Despite being tested like never before last year, the data center has kept the world up and running at a critical time,” Weaver said in a press release. “We believe this market is poised for explosive growth and NeuroBlade looks to have a promising course ahead.”
SAP is another early partner of NeuroBlade. Patrick Jahnke, head of the innovation office at SAP, says the enterprise software giant is eager to work with NeuroBlade to accelerate database management system (DBMS) workloads.
“Performance projections and the breadth of use cases demonstrate great potential for significantly improving DBMS performance with higher energy efficiency and lower total cost of ownership, on-premises and in the cloud,” Jahnke said in a press release. “With this exciting collaboration with NeuroBlade, SAP will open up new possibilities for building the data center of the future.”