| Time Limit | Memory Limit | Submissions | Accepted | Solvers | Acceptance Rate |
|---|---|---|---|---|---|
| 4 초 | 256 MB | 16 | 2 | 2 | 22.222% |
Lately, Byton has taken an interest in the science of teaching computers to identify patterns in data and draw conclusions from them -- machine learning.
During his research in this field, he had to investigate the properties of some complicated function $f$. He computed its value at a number of points $x_1, x_2, \dots, x_n,ドル obtaining results $y_1, y_2, \dots, y_n$.
He would like to approximate $f$ by some continuous function $g$ composed of two linear parts; formally, for some $x \in \mathbb{R},ドル $g$ should be linear for arguments less than $x$ and linear for arguments greater than $x$.
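As an illustration (not part of the statement), one convenient way to describe such a function so that continuity holds automatically is by the breakpoint, the value there, and the two slopes. A minimal Python sketch, with hypothetical names:

```python
def make_g(x0, y0, s_left, s_right):
    """Continuous two-piece linear function passing through (x0, y0):
    slope s_left for arguments below x0, slope s_right at or above x0."""
    def g(x):
        s = s_left if x < x0 else s_right
        # Both pieces pass through (x0, y0), so g is continuous at x0.
        return y0 + s * (x - x0)
    return g
```

With this parameterization, any choice of the four numbers yields a valid candidate $g,ドル since both pieces meet at $(x_0, y_0)$.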
Byton would like to achieve a faithful approximation of $f$. He would like to minimize the mean squared error:
\[\frac1n \sum_{i=1}^n (y_i - g(x_i))^2.\]
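For concreteness, the error above can be evaluated directly for a candidate $g$ given as two lines and a breakpoint. A minimal Python sketch (the function name and parameters are illustrative, not from the statement):

```python
def mse_two_piece(points, brk, a1, b1, a2, b2):
    """Mean squared error of g on the sampled points, where
    g(x) = a1*x + b1 for x < brk and g(x) = a2*x + b2 otherwise.
    (The statement also requires g to be continuous, i.e. the two
    lines should agree at brk.)"""
    total = 0.0
    for x, y in points:
        g = a1 * x + b1 if x < brk else a2 * x + b2
        total += (y - g) ** 2
    return total / len(points)
```

Minimizing this quantity over all continuous two-piece candidates is the task; the sketch only evaluates one candidate.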
The first line of the input contains a single integer $n$ (1ドル \le n \le 100,000円$). Each of the next $n$ lines contains two integers $x_i, y_i$ (0ドル \le x_i \le 1,000円,000円,ドル 0ドル \le y_i \le 1000$). The numbers $x_i$ are pairwise distinct.
You should print a single real number -- the minimum possible mean squared error he is able to achieve.
Your answer will be accepted if its absolute error does not exceed 1ドル$.
5
0 1
2 0
1 3
4 4
3 2
0.8333333333333
7
0 0
1 1
2 2
3 4
4 2
5 1
6 0
0.0659340659341
In the first example, the optimal mean squared error is $\frac{5}{6}$. It can be achieved by taking the linear function $-\frac{x}{2} + \frac{11}{6}$ on the left and the linear function 2ドルx - 4$ on the right.
In the second example, the minimum mean squared error is $\frac{6}{91}$. The function can be composed of the lines $\frac{16}{13}x - \frac{2}{13}$ and $-\frac{16}{13}x + \frac{94}{13}$.