# 146. LRU Cache

**<font color=red>Difficulty: Hard</font>**

## Problem

> Original link

* https://leetcode.com/problems/lru-cache/description/

> Description

```
Design and implement a data structure for Least Recently Used (LRU) cache. It should support the following operations: get and put.

get(key) - Get the value (will always be positive) of the key if the key exists in the cache, otherwise return -1.
put(key, value) - Set or insert the value if the key is not already present. When the cache reached its capacity, it should invalidate the least recently used item before inserting a new item.

Follow up:
Could you do both operations in O(1) time complexity?

Example:

LRUCache cache = new LRUCache( 2 /* capacity */ );

cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);    // evicts key 2
cache.get(2);       // returns -1 (not found)
cache.put(4, 4);    // evicts key 1
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4
```

## Solution

> Approach 1

**Time complexity: O(1), Space complexity: O(N)**

An LRU cache essentially has to maintain a data structure ordered by access time.

Data structures that can surface the least recently updated element include a queue, a heap, and a linked list.

1. First, we need fast access to a specified element. A plain linked list needs an O(n) traversal for this, but a dictionary mapping each key to its node brings lookup down to O(1).
2. Second, since we must delete and re-insert arbitrary nodes at any time, a doubly linked list is clearly the better fit.

With that settled, the logic is as follows:

1. The LRUCache maintains a `cache` dictionary mapping keys to nodes, a `cap` for the maximum capacity, and a doubly linked list in which `head.next` is the most recently used node and `tail.prev` is the least recently used node (the one evicted when capacity is full).
2. For the `get` method:
    - If the key is in the `cache` dictionary, its node is in the list:
        - Fetch the node via the dictionary, remove it from the list, and re-insert it (insertion places it at the most recent position), then return its value.
    - Otherwise, return -1.
3. For the `put` method:
    - If the key is in the `cache` dictionary, its node is in the list:
        - Fetch the node via the dictionary, update its value, remove it from the list, and re-insert it.
    - If the key is not in the `cache` dictionary, it is a new node:
        - If the capacity is not yet full:
            - Create the new node, insert it into the list, and add it to `cache`.
        - If the capacity is full:
            - Remove `tail.prev`, the least recently used node, from the list.
            - Delete that node's entry from `cache`.
            - Create the new node, insert it into the list, and add it to `cache`.
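The two list primitives the steps above rely on, unlinking a node and splicing it in right after `head`, can be sketched in isolation first. This is a minimal sketch using sentinel nodes; the helper names here are illustrative, not the final implementation's:

```python
class Node(object):
    def __init__(self, key, val):
        self.key, self.val = key, val
        self.prev = self.next = None

# Sentinel nodes: head <-> tail, so insert/remove never touch a None neighbor
head, tail = Node(None, None), Node(None, None)
head.next, tail.prev = tail, head

def insert_after_head(node):
    # splice node in right after head: the "most recently used" position
    n = head.next
    head.next, node.prev = node, head
    node.next, n.prev = n, node

def remove(node):
    # unlink node in O(1), given a direct reference to it
    node.prev.next, node.next.prev = node.next, node.prev

a, b = Node(1, 1), Node(2, 2)
insert_after_head(a)   # head <-> a <-> tail
insert_after_head(b)   # head <-> b <-> a <-> tail
remove(a)              # head <-> b <-> tail
print(tail.prev.key)   # 2
```

Because every node keeps both neighbors, `remove` needs no traversal, which is exactly what makes the dictionary-of-nodes lookup pay off.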

The AC (accepted) code is below. Note that in step 3, when the key is not in the `cache` dictionary, both branches end with the same "create the new node, insert it into the list, add it to `cache`" step, so the code factors that duplication out.

```python
class Node(object):

    def __init__(self, key, val):
        self.key = key
        self.val = val
        self.next = None
        self.prev = None


class LRUCache(object):

    def __init__(self, capacity):
        """
        :type capacity: int
        """
        self.cache = {}               # key -> node, for O(1) lookup
        self.cap = capacity
        self.head = Node(None, None)  # sentinel: head.next is the most recently used node
        self.tail = Node(None, None)  # sentinel: tail.prev is the least recently used node
        self.head.next = self.tail
        self.tail.prev = self.head

    def remove(self, node):
        # Unlink node from the doubly linked list in O(1)
        n = node.next
        p = node.prev
        p.next = n
        n.prev = p
        node.next = None
        node.prev = None

    def insert(self, node):
        # Insert node right after head, i.e. at the most recently used position
        n = self.head.next
        self.head.next = node
        node.next = n
        n.prev = node
        node.prev = self.head

    def get(self, key):
        """
        :type key: int
        :rtype: int
        """
        if key in self.cache:
            node = self.cache[key]
            self.remove(node)   # move the node to the most recently used position
            self.insert(node)
            return node.val
        else:
            return -1

    def put(self, key, value):
        """
        :type key: int
        :type value: int
        :rtype: void
        """
        if key in self.cache:
            node = self.cache[key]
            node.val = value
            self.remove(node)   # move the updated node to the most recently used position
            self.insert(node)
        else:
            if len(self.cache) >= self.cap:
                delete_node = self.tail.prev      # least recently used node
                del self.cache[delete_node.key]
                self.remove(delete_node)
            node = Node(key, value)
            self.insert(node)
            self.cache[key] = node


# Your LRUCache object will be instantiated and called as such:
# obj = LRUCache(capacity)
# param_1 = obj.get(key)
# obj.put(key,value)
```
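As an aside on the follow-up, Python's standard library can express the same O(1) behavior very compactly with `collections.OrderedDict`, whose `move_to_end` (Python 3.2+) and `popitem(last=False)` stand in for the manual list surgery. This is an alternative sketch, not the approach above, and `LRUCacheOD` is just a name chosen here:

```python
from collections import OrderedDict


class LRUCacheOD(object):
    def __init__(self, capacity):
        self.cap = capacity
        self.od = OrderedDict()  # front = least recently used, back = most recently used

    def get(self, key):
        if key not in self.od:
            return -1
        self.od.move_to_end(key)  # mark as most recently used
        return self.od[key]

    def put(self, key, value):
        if key in self.od:
            self.od.move_to_end(key)
        elif len(self.od) >= self.cap:
            self.od.popitem(last=False)  # evict the least recently used entry
        self.od[key] = value


# Driving it with the example sequence from the problem statement:
cache = LRUCacheOD(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # 1
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -1
```

`OrderedDict` is itself backed by a hash map plus a doubly linked list, so this is the same data structure the hand-rolled version builds explicitly.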