Steps to reproduce the problem
1. Create a database with a memory-optimized filegroup and container.
2. Create a schema-only in-memory table with a nonclustered primary key.
3. Simulate insert and delete activity.

My result is that index unused memory stays high and won't go down.
USE master
go
DROP DATABASE IF EXISTS MemoryOptimizedTest
CREATE DATABASE MemoryOptimizedTest
GO
USE MemoryOptimizedTest
GO
ALTER DATABASE MemoryOptimizedTest
ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
GO
ALTER DATABASE MemoryOptimizedTest ADD FILE (name='imoltp_mod1', filename='c:\imoltp_mod1') TO FILEGROUP imoltp_mod
GO
DROP TABLE IF EXISTS dbo.MyCache
CREATE TABLE dbo.MyCache
(
PK int NOT NULL,
SecondInt int NOT NULL,
ThirdInt int NOT NULL,
CONSTRAINT PK_MyCache PRIMARY KEY NONCLUSTERED (PK)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
go
/* Generate activity and monitor table size */
USE MemoryOptimizedTest
go
SELECT
object_id,
OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS Table_Name,
memory_allocated_for_table_kb,
memory_used_by_table_kb,
memory_allocated_for_indexes_kb,
memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID = OBJECT_ID('dbo.MyCache')
;WITH
L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
L1 AS(SELECT 1 AS c FROM L0 CROSS JOIN L0 AS B),
L2 AS(SELECT 1 AS c FROM L1 CROSS JOIN L1 AS B),
L3 AS(SELECT 1 AS c FROM L2 CROSS JOIN L2 AS B),
L4 AS(SELECT 1 AS c FROM L3 CROSS JOIN L3 AS B),
L5 AS(SELECT 1 AS c FROM L4 CROSS JOIN L4 AS B),
Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS n FROM L5)
, tally AS (SELECT TOP (10000) n FROM Nums ORDER BY n)
INSERT INTO dbo.MyCache (PK, SecondInt, ThirdInt)
SELECT
n
, n+1
, n+2
FROM tally
WAITFOR DELAY '00:00:02'
DELETE FROM dbo.MyCache
GO 50
When I run it on my local machine (Microsoft SQL Server 2017 (RTM-GDR) (KB4505224) - 14.0.2027.2 (X64) Developer Edition, 16 GB max memory, 1.5 GB available memory), memory_allocated_for_indexes_kb fluctuates normally.
When I run it on our DEV environment (Microsoft SQL Server 2019 (RTM-CU7) (KB4570012) - 15.0.4063.15 (X64) Enterprise Edition, 2 TB max memory, 220 GB available memory), memory_allocated_for_indexes_kb only grows. I simulated activity against the table for a few hours and ended up with index used memory = 0.24 MB and index unused memory = 385 MB, and it won't go down.
According to PerfMon, the garbage collector did run (the Sweep expired rows removed/sec counter in the XTP Garbage Collection object showed activity).
I read somewhere that the garbage collector doesn't free up space until it faces memory pressure but it seems weird that it would hold so much unused memory.
EDIT: I used a resource pool for the database holding the in-memory tables. One percent is the lowest I could go. I filled the memory with another table taking 99% of the resource pool, and memory_allocated_for_indexes_kb still won't go down, whether there is transactional activity or not. No active transactions are blocking GC, and I've waited for more than 30 minutes.
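To confirm that no active transactions are blocking GC, a check along these lines against sys.dm_db_xtp_transactions can be used (a sketch; run in the database that holds the in-memory tables, column availability may vary by version):

```sql
-- Sketch: list open in-memory OLTP transactions in the current database.
-- An old active transaction here would keep deleted row versions visible
-- and prevent garbage collection from reclaiming them.
SELECT
    xtp_transaction_id,
    session_id,
    state_desc,
    begin_tsn,
    end_tsn
FROM sys.dm_db_xtp_transactions
ORDER BY begin_tsn;
```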
I can't accept any of the current answers, as my problem is still not solved. Note that the table row memory is cleaned up, but the index memory is not.
3 Answers
I read somewhere that the garbage collector doesn't free up space until it faces memory pressure but it seems weird that it would hold so much unused memory.
Is it weird, though? Why do garbage collection now when it can be put off until later :)
From the documentation, a confirmation of your thought (emphasis mine):
After a user transaction commits, it identifies all queued items associated with the scheduler it ran on and then releases the memory. If the garbage collection queue on the scheduler is empty, it searches for any non-empty queue in the current NUMA node. If there is low transactional activity and there is memory pressure, the main garbage-collection thread can access garbage collect rows from any queue. If there is no transactional activity after (for example) deleting a large number of rows and there is no memory pressure, the deleted rows will not be garbage collected until the transactional activity resumes or there is memory pressure.
Your test workload seems quite small and I wouldn't be surprised if it isn't enough to trigger any collection, especially with how robust that system's memory resources are.
With 2TB of memory, I wouldn't sweat < 500MB of memory usage for this purpose, nor would I worry about this until it became an actual problem.
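One way to see whether deleted rows really are sitting in the per-scheduler garbage-collection queues the documentation describes is the instance-level queue DMV; a sketch (column names per my reading of the documentation):

```sql
-- Sketch: per-scheduler XTP garbage-collection queue statistics.
-- A persistently large current_queue_depth suggests deleted rows are
-- queued but not yet swept, consistent with GC waiting for memory
-- pressure or transactional activity.
SELECT
    queue_id,
    total_enqueues,
    total_dequeues,
    current_queue_depth,
    maximum_queue_depth
FROM sys.dm_xtp_gc_queue_stats
ORDER BY current_queue_depth DESC;
```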
I mean, I gotta test somewhere before I put it on production. Relying on it freeing memory only under memory pressure is risky. Can I test it somehow now, for example with a trace flag or a query hint? I don't want to change the max memory setting, as that would influence others. – Zikato, Sep 14, 2020 at 15:21
@NedOtter I meant lowering max memory to create memory pressure and see the memory reclaimed. Resource Governor is a great idea; I didn't think of it. My issue is that a table that uses 1 MB of memory can grow to 400x its size and still not be garbage collected (even though I saw XTP Garbage Collection activity in PerfMon). – Zikato, Sep 15, 2020 at 19:08
You're running Enterprise, so you should create a resource pool so that the buffer pool is not impacted by in-mem growth.
Details here:
http://nedotter.com/archive/2016/12/monitoring-in-memory-oltp-resource-pools/
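For reference, binding the database to a dedicated pool looks roughly like this (pool name and percentages are illustrative; the binding only takes effect after the database is taken offline and brought back online):

```sql
-- Sketch: cap in-memory OLTP memory with a dedicated resource pool.
-- Pool name and memory limits are illustrative, not prescriptive.
CREATE RESOURCE POOL Pool_InMemory
    WITH (MIN_MEMORY_PERCENT = 10, MAX_MEMORY_PERCENT = 10);
GO
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- Bind the database holding the memory-optimized tables to the pool.
EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'MemoryOptimizedTest',
    @pool_name     = N'Pool_InMemory';
GO
-- The binding takes effect the next time the database comes online.
ALTER DATABASE MemoryOptimizedTest SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE MemoryOptimizedTest SET ONLINE;
```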
I'm still running into the "memory leak" issue on both SQL 2019 and 2022. It only happens for a range index (not a hash index).
But now we have the system procedure sys.sp_xtp_force_gc, which can force garbage collection and release the memory.
EXEC sys.sp_xtp_force_gc N'MemoryOptimizedTest'
The documentation also says:
Prior to SQL Server 2022 (16.x), execute this stored procedure twice.
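So on a pre-2022 instance such as the 2019 DEV server above, the call would look like this (running it twice per the documentation, then re-checking the memory DMV):

```sql
-- On SQL Server 2019 (pre-2022), run the forced garbage collection twice.
EXEC sys.sp_xtp_force_gc N'MemoryOptimizedTest';
EXEC sys.sp_xtp_force_gc N'MemoryOptimizedTest';
GO
-- Re-check index memory afterwards.
SELECT
    memory_allocated_for_indexes_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('dbo.MyCache');
```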