Outdated x86 CPU and NIC architectures bottleneck AI infrastructure, limiting the true potential of generative AI. NeuReality's NR1® Chip combines two entirely new categories of silicon, an AI-CPU and an AI-NIC, in a single chip, redefining AI data center inference. It removes these bottlenecks, boosting generative AI token output by up to 6.5x at the same cost and power as x86 CPU-based systems, making AI widely affordable and accessible for businesses and governments. It works with any AI accelerator or GPU, maximizing GPU utilization, performance, and system energy efficiency. The NR1® Inference Appliance, with its built-in software, intuitive SDK, and APIs, comes preloaded with out-of-the-box LLMs such as Llama 3, Mistral, DeepSeek, Granite, and Qwen for rapid, seamless deployment with significantly reduced complexity, cost, and power consumption at scale.

Moshe Tanach
Moshe Tanach is Founder and CEO at NeuReality.
Before founding NeuReality, he served as Director of Engineering at Marvell and Intel, leading complex wireless and networking products to mass production.
He was also appointed Vice President of R&D at DesignArt-Networks (later acquired by Qualcomm), where he developed 4G base station products.
He holds a Bachelor of Science in Electrical Engineering (BSEE), cum laude, from the Technion, Israel.
NeuReality
Website: https://www.neureality.ai/
Founded in 2020, NeuReality is revolutionizing AI with its complete NR1® AI Inference Solutions powered by the NR1® Chip, the world's first true AI-CPU built for inference workloads at scale. This chip redefines AI by combining compute that is 6x more powerful than traditional CPUs with the advanced networking capabilities of an AI-NIC, all in one cohesive unit. It also includes on-chip inference orchestration and video and audio processing, ensuring businesses and governments maximize their AI hardware investments.
Our technology solves the critical compute and networking bottlenecks that leave expensive GPUs sitting idle. The NR1 pairs with any AI accelerator (GPU, FPGA, or ASIC), boosting its utilization from under 50% with traditional CPUs to nearly 100%. This unlocks wasted capacity, delivering superior price/performance, unparalleled energy efficiency, and higher AI token output within the same cost and power envelope.
The NR1 Chip is the heart of our ready-to-deploy NR1® Inference Appliance, which can be configured with any GPU. This compact server comes preloaded with our comprehensive NR Software suite, including all necessary SDKs and inference APIs. It is also equipped with optimized AI models for computer vision, generative AI, and agentic AI, featuring popular choices such as Llama 3, DeepSeek, Qwen, and Mixtral. Our mission is to make the AI revolution accessible and affordable, dismantling the barriers of excessive cost, energy consumption, and complexity for all organizations.