Learning augmented algorithm
A learning augmented algorithm is an algorithm that can make use of a prediction to improve its performance.[1] Whereas regular algorithms take only the problem instance as input, learning augmented algorithms accept an extra parameter. This extra parameter is often a prediction of some property of the solution, which the algorithm uses to improve its running time or the quality of its output.
Description
A learning augmented algorithm typically takes an input $(\mathcal{I}, \mathcal{A})$. Here $\mathcal{I}$ is a problem instance and $\mathcal{A}$ is the advice: a prediction about a certain property of the optimal solution. The type of the problem instance and the prediction depend on the algorithm. Learning augmented algorithms usually satisfy the following two properties:
- Consistency. A learning augmented algorithm is said to be consistent if the algorithm can be proven to have a good performance when it is provided with an accurate prediction.[1] Usually, this is quantified by giving a bound on the performance that depends on the error in the prediction.
- Robustness. An algorithm is called robust if its worst-case performance can be bounded even if the given prediction is inaccurate.[1]
Learning augmented algorithms generally do not prescribe how the prediction is to be obtained; for this purpose, machine learning can be used.[citation needed]
Examples
Binary search
The binary search algorithm is an algorithm for finding elements of a sorted list $x_1, \ldots, x_n$. It needs $O(\log(n))$ steps to find an element with some known value $y$ in a list of length $n$. With a prediction $i$ for the position of $y$, the following learning augmented algorithm can be used.[1]
- First, look at position $i$ in the list. If $x_i = y$, the element has been found.
- If $x_i < y$, look at positions $i+1, i+2, i+4, \ldots$ until an index $j$ with $x_j \geq y$ is found.
- Now perform a binary search on $x_i, \ldots, x_j$.
- If $x_i > y$, do the same as in the previous case, but instead consider $i-1, i-2, i-4, \ldots$.
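A minimal sketch of this procedure in Python (the function and variable names are illustrative, not from the cited source):

```python
def learning_augmented_search(xs, target, predicted_index):
    """Search the sorted list xs for target, starting from a predicted index.

    Returns the index of target, or None if target is not in xs.
    """
    n = len(xs)
    i = min(max(predicted_index, 0), n - 1)  # clamp the prediction to a valid index
    if xs[i] == target:
        return i
    if xs[i] < target:
        # Probe i+1, i+2, i+4, ... until an index with value >= target is found
        # (or the end of the list is reached).
        offset = 1
        while i + offset < n and xs[i + offset] < target:
            offset *= 2
        lo, hi = i + offset // 2, min(i + offset, n - 1)
    else:
        # Symmetric case: probe i-1, i-2, i-4, ... to the left.
        offset = 1
        while i - offset >= 0 and xs[i - offset] > target:
            offset *= 2
        lo, hi = max(i - offset, 0), i - offset // 2
    # Ordinary binary search on the bracketed range xs[lo..hi].
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None  # target is not in the list
```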
The error is defined as $\eta = |i - i^*|$, where $i^*$ is the true index of $y$. In the learning augmented algorithm, probing the positions $i+1, i+2, i+4, \ldots$ takes about $\log_2(\eta)$ steps. Then a binary search is performed on a sublist of size at most $2\eta$, which takes another $\log_2(\eta)$ steps, so the total running time of the algorithm is about $2\log_2(\eta)$. Hence, when the error is small, the algorithm is faster than a normal binary search; this shows that the algorithm is consistent. Even in the worst case the error is at most $n$, in which case the algorithm takes at most $O(\log(n))$ steps, so the algorithm is also robust.
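For concreteness, a small usage example of the sketch above (the numbers are made up for illustration):

```python
xs = list(range(0, 2_000_000, 2))   # one million sorted even numbers
target = 1_234_568                  # true position i* = 617_284
prediction = 617_300                # prediction is off by eta = 16

print(learning_augmented_search(xs, target, prediction))  # -> 617284
# With eta = 16 the search needs roughly 2 * log2(16) = 8 probes after the
# first one, whereas a plain binary search on a million elements needs about
# log2(1_000_000) ~ 20 probes.
```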
More examples
Learning augmented algorithms are known for:
- The ski rental problem[2]
- The maximum weight matching problem[3]
- The weighted paging problem[4]
References
1. Mitzenmacher, Michael; Vassilvitskii, Sergei (2020). "Algorithms with Predictions". Beyond the Worst-Case Analysis of Algorithms. Cambridge University Press. pp. 646–662. arXiv:2006.09123. doi:10.1017/9781108637435.037. ISBN 978-1-108-63743-5.
2. Wang, Shufan; Li, Jian; Wang, Shiqiang (2020). "Online Algorithms for Multi-shop Ski Rental with Machine Learned Advice". NIPS'20: Proceedings of the 34th International Conference on Neural Information Processing Systems. arXiv:2002.05808. ISBN 978-1-71382-954-6. OCLC 1263313383.
3. Dinitz, Michael; Im, Sungjin; Lavastida, Thomas; Moseley, Benjamin; Vassilvitskii, Sergei (2021). "Faster Matchings via Learned Duals". Advances in Neural Information Processing Systems. Curran Associates, Inc.
4. Bansal, Nikhil; Coester, Christian; Kumar, Ravi; Purohit, Manish; Vee, Erik (2022). "Learning-Augmented Weighted Paging". Proceedings of the 2022 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA). Society for Industrial and Applied Mathematics. pp. 67–89. doi:10.1137/1.9781611977073.4. ISBN 978-1-61197-707-3.