New Things You Can Do With Big Memory
MemVerge Brings Intelligent Scheduling and Resource Management to AMD Instinct GPUs
After rigorous testing, AMD's cloud infra team gave MemVerge’s solution high marks for its mature, intuitive user interface and enterprise-grade scheduling and resource-sharing capabilities. "Our partner ecosystem is critical to the success of the AMD Instinct business,”...
One Memory, Any Model: Why AI’s Next Breakthrough Isn’t a Bigger Model—It’s Remembering Who We Are
In-Weight Learning vs. In-Context Learning: Lessons from Human Psychology for AI
The past few years have taught us new terms to describe how large language models (LLMs) learn and adapt. Two of the most important are in-weight learning and in-context learning. Understanding the difference between them not only clarifies where we are with AI today...
Why AI Needs Memory
In the 1980s, supercomputers captured the imagination. They could model nuclear reactions or simulate the weather—but only if paired with storage. Without disks to hold results, every run would vanish, forcing scientists to start over. Compute without storage was raw...
MemVerge Launches MemMachine
World's Most Powerful AI Memory Layer, responding to over $13 billion in demand for AI memory infrastructure across clouds, models, and enterprise environments. MILPITAS, CA – September 23, 2025 – MemVerge, the leader in Big Memory software, today announced the launch...
AI Memory, The Next Frontier
Over the last two years, the AI conversation has been dominated by models and compute. Bigger models, faster GPUs, cheaper inference. Necessary—but not sufficient. If we want AI that is genuinely useful at work and trustworthy in the enterprise, we need to confront...
Build an Enterprise Memory Vault with MemVerge.ai Intelligent Memory Software. Available in the new AWS Marketplace AI Agents and Tools Category.
MILPITAS, Calif., July 16, 2025 /PRNewswire/ -- MemVerge, a leading provider of software for AI infrastructure, today announced the availability of MemVerge.ai Intelligent Memory in the new AI Agents and Tools category of AWS Marketplace. Customers can now use AWS...
How a Machine Learning Expert thinks about RAG vs Fine-tuning
Since OpenAI introduced ChatGPT in December 2022, the world has been swept up in the wave of Generative AI. Enterprises are now actively exploring how to leverage AI to increase productivity, streamline operations, and gain a competitive edge in an increasingly...
What Does DeepSeek Mean for Enterprise AI?
Since OpenAI introduced ChatGPT in December 2022, the world has been swept up in the wave of Generative AI. Enterprises are now actively exploring how to leverage AI to increase productivity, streamline operations, and gain a competitive edge in an increasingly digital-first world. Software development, IT management, and customer support were among the first to feel the impact.
MemVerge at AI Field Day 6
AI Field Days are events produced by The Futurum Group that connect independent technologists with the latest developments in the field from the companies that are developing and applying AI to IT infrastructure. On January 29 at the AI Field...
Accelerating Data Retrieval in Retrieval-Augmented Generation (RAG) Pipelines using CXL
RAG (retrieval augmented generation) has emerged as a powerful technique to customize LLMs for users and use cases beyond the model’s training set. However, there are multiple potential bottlenecks within a RAG pipeline.
Announcing Memory Machine Cloud 3.0 Imperia
I am very excited to announce that Memory Machine Cloud (MMCloud) Imperia (3.0) is now generally available. Our vision for MMCloud is that it delivers unparalleled resource efficiency for compute intensive workloads and extreme ease-of-use for both Cloud admins and...
Introducing Weighted Interleaving in Linux for Enhanced Memory Bandwidth Management
With the release of Linux Kernel 6.9, system administrators have gained a powerful new tool for managing memory distribution across NUMA nodes: Weighted Interleaving. This feature is especially beneficial in systems utilizing various types of memory, including traditional DRAM and Compute Express Link (CXL) attached memory. In this article, we’ll explore Weighted Interleaving, how it works, and how to use it.
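As a rough illustration of the idea (the function name and bandwidth figures below are hypothetical, not from the article): weighted interleaving assigns each NUMA node an integer weight, and the kernel distributes page allocations across nodes in that ratio. A natural starting point is weights proportional to each node's memory bandwidth, reduced to the smallest integer ratio:

```python
from functools import reduce
from math import gcd

def interleave_weights(bandwidths_gbps):
    """Reduce per-node bandwidths to the smallest integer ratio.

    Weighted interleaving then apportions page allocations in this
    ratio, e.g. 3 pages on the DRAM node for every 1 on the CXL node.
    """
    g = reduce(gcd, bandwidths_gbps)
    return [b // g for b in bandwidths_gbps]

# Hypothetical system: DRAM node at 240 GB/s, CXL node at 80 GB/s.
print(interleave_weights([240, 80]))  # -> [3, 1]
# In Linux 6.9+ these weights are exposed per node under
# /sys/kernel/mm/mempolicy/weighted_interleave/nodeN.
```

Once written to sysfs, the policy can be requested per process; newer numactl releases add a weighted-interleave option for exactly this purpose.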
Unleashing the Future of Memory Management: Exploring CXL Dynamic Capacity Devices with Docker and QEMU
In the ever-advancing realm of technology, developers and application owners always look for innovative tools and methodologies to boost performance and scalability. A revolutionary stride in this direction is the integration of Compute Express Link (CXL) technology, particularly through the utilization of Dynamic Capacity Devices (DCD). CXL, an open standard for high-speed CPU-to-device and CPU-to-memory interconnects, substantially enhances data center and cloud environments, offering many benefits.
Memory Wall, Big Memory, and the Era of AI
In the fast-evolving landscape of artificial intelligence (AI), where models are growing larger and more complex by the day, the demand for efficient processing of vast amounts of data has ushered in a new era of computing infrastructure.
MemVerge and Micron Boost NVIDIA GPU Utilization with CXL® Memory
The joint solution increases GPU utilization by 77% and more than doubles the speed of OPT-66B batch inference. San Jose, CA – March 18, 2024 – MemVerge®, a leader in AI-first Big Memory Software, has joined forces with Micron to unveil a groundbreaking solution that...
MemVerge and Sentieon Announce WaveRider for Sentieon to Accelerate Next-Generation Sequencing in the Cloud
Early Customers Realize 10x Increase in Performance and Cloud Cost Savings; Sentieon Software Offered Free in Memory Machine Cloud Subscription MILPITAS, Calif. – November 14, 2023 – MemVerge®, pioneers of Big Memory software, and Sentieon®, the market leader in...
Samsung, MemVerge, H3 Platform, and XConn Demonstrate Memory Pooling and Sharing for “Endless Memory”
Joint demonstration eliminates Out-of-Memory errors to accelerate AI/ML workloads SANTA CLARA, Calif. – Aug 8, 2023 – Samsung, MemVerge, H3 Platform, and XConn today unveiled a 2TB Pooled CXL Memory System at Flash Memory Summit. The system addresses performance...
Emulating CXL Memory Expanders in QEMU
Build and install a working branch of QEMU. Launch a pre-made QEMU instance with a CXL Memory Expander. Create a memory region for the CXL Memory Expander. Convert that memory region between devdax and NUMA modes.
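The steps above can be sketched as a command sequence. This is a hedged sketch, not the article's exact walkthrough: device names (`mem0`, `decoder0.0`, `dax0.0`), sizes, and paths are illustrative, and it assumes a QEMU build with CXL support plus the `cxl` and `daxctl` utilities from the ndctl project inside the guest.

```shell
# Host: launch a guest with one emulated CXL Type-3 memory expander.
qemu-system-x86_64 -machine q35,cxl=on \
  -object memory-backend-file,id=cxl-mem0,share=on,mem-path=/tmp/cxl.raw,size=256M \
  -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
  -device cxl-rp,port=0,bus=cxl.1,id=rp0,chassis=0,slot=2 \
  -device cxl-type3,bus=rp0,volatile-memdev=cxl-mem0,id=cxl-vmem0 \
  -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G
  # (plus the usual disk/kernel/network options for your guest image)

# Guest: carve a RAM region out of the expander; it appears as a devdax device.
cxl create-region -m mem0 -d decoder0.0 -t ram

# Guest: convert between modes. system-ram onlines the region as a NUMA node;
# devdax hands it back as a character device for direct mmap access.
daxctl reconfigure-device --mode=system-ram dax0.0
daxctl reconfigure-device --mode=devdax dax0.0
```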
MemVerge and SK hynix Accelerate Memory Pooling and Sharing Software Development with CXL Flight Simulator
The companies contribute to the open-source community and deliver pre-built QEMU virtual machines that emulate CXL memory pool devices MILPITAS, Calif. – August 2, 2023 – MemVerge®, pioneers of Big Memory software, today introduced QEMU-based CXL Flight Simulator, a...
Emulating CXL Shared Memory Devices in QEMU
Build and install a working branch of QEMU. Launch a pre-made QEMU lab with 2 hosts utilizing a shared memory device. Access the shared memory region through a devdax device, and share information between the two hosts to verify that the shared memory region is functioning.
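Again as a rough sketch (paths, ports, and device names are illustrative): the key ingredient is a single `memory-backend-file` with `share=on` that both QEMU instances map, so stores from one guest become visible to the other through their devdax devices. Emulation here is a development vehicle, not a performance model.

```shell
# Host: one backing file, handed to both guests.
truncate -s 256M /tmp/cxl-shared.raw
for port in 2221 2222; do
  qemu-system-x86_64 -machine q35,cxl=on \
    -object memory-backend-file,id=cxl-shared,share=on,mem-path=/tmp/cxl-shared.raw,size=256M \
    -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
    -device cxl-rp,port=0,bus=cxl.1,id=rp0,chassis=0,slot=2 \
    -device cxl-type3,bus=rp0,volatile-memdev=cxl-shared,id=cxl-vmem0 \
    -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G \
    -nic user,hostfwd=tcp::${port}-:22 &
done

# In each guest, create the region as in the memory-expander lab and keep
# dax0.0 in devdax mode. Then mmap /dev/dax0.0 (devdax devices support mmap,
# not read()/write()): guest A writes a marker into the mapping and guest B
# reads it back to confirm the shared region works.
```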
MemVerge Unveils Memory Machine Cloud for Sustainability and a 3-Step Blueprint for Slashing Carbon Footprint
Delivering fine-grained observability of CO2 emissions of every workload, and providing automation for achieving sustainable cloud computing MILPITAS, Calif. – July 26, 2023 – MemVerge®, pioneers of cloud automation 2.0 software, today announced the general...
MemVerge Introduces Memory Machine Cloud: Unleashes the Power of Cloud Automation 2.0 So Users Can Seize Control
Automation platform empowers non-expert users to control cloud deployments, availability of Spot instances, and right-sizing of resources. Memory Machine Cloud Essentials is free forever. MILPITAS, Calif. – June 22, 2023 – MemVerge®, a cloud automation pioneer, today...
MemVerge Unveils World’s First CXL-Based Multi-Server Shared Memory at ISC
Project Gismo accelerates distributed applications by eliminating the IO wall consisting of network IO and data copies Hamburg, Germany – May 23, 2023 – MemVerge®, a leading innovator in Big Memory Software, is proud to announce the groundbreaking launch of Project...
MemVerge and SK hynix announce Endless Memory
Technology preview demonstrates a co-engineered system that never runs out of memory Hamburg, Germany – May 22, 2023 – MemVerge® today announced that the company and SK hynix Inc., two leading innovators in Big Memory solutions, have launched Project Endless Memory, a...
At MemCon, MemVerge Demonstrates How Compute Express Link™ (CXL™) Memory Expansion Will Close the Gap Between CPU and Memory Performance
CXL memory bandwidth and capacity expansion boost OLTP benchmark results MemCon, Mountain View, Calif. — March 28, 2023 — MemVerge®, pioneers of Big Memory software, is attending MemCon to demonstrate the capacity and performance benefits of Compute Express Link™...
MemVerge Names Jeffrey Wang Vice President of Engineering
Software-Defined Innovation Visionary Joins MemVerge to Lead Engineering Team MILPITAS, Calif. – January 18, 2023 – MemVerge®, pioneers of Big Memory software, today announced that Jeffrey Wang has been named Vice President of Engineering. With decades of...
MemVerge is First to Offer Software-Defined Memory Management for Intel Xeon Scalable Processors and Compute Express Link (CXL)
MemVerge Memory Machine Delivers Software-Defined Compute Express Link (CXL) Memory Management for the Dynamic Tiering of Platform Memory MILPITAS, Calif. – January 10, 2023 – MemVerge®, pioneers of Big Memory software, today announced that MemVerge Memory Machine™ is...
